From da2da692a07e7e48804bbf992fd4ff1fa026b3ce Mon Sep 17 00:00:00 2001 From: Gaurav Nelson Date: Thu, 2 Nov 2017 15:49:11 +1000 Subject: [PATCH] [dedicated-3.7] changed all instances of emptydir and EmptyDir to emptyDir (cherry picked from commit fdd8a09f70db154f3f883bf29c96a0e79d50d45e) https://github.com/openshift/openshift-docs/pull/6084 --- architecture/additional_concepts/storage.adoc | 14 +- creating_images/guidelines.adoc | 2 +- dev_guide/application_lifecycle/new_app.adoc | 2 +- dev_guide/volumes.adoc | 4 +- getting_started/devpreview_faq.adoc | 4 +- install_config/cluster_metrics.adoc | 849 +++--- install_config/install/advanced_install.adoc | 2278 ++++++++++++----- release_notes/ose_3_1_release_notes.adoc | 2 +- release_notes/ose_3_2_release_notes.adoc | 2 +- using_images/other_images/jenkins.adoc | 2 +- 10 files changed, 2093 insertions(+), 1066 deletions(-) diff --git a/architecture/additional_concepts/storage.adoc b/architecture/additional_concepts/storage.adoc index 82aceefb9010..e59855a460a3 100644 --- a/architecture/additional_concepts/storage.adoc +++ b/architecture/additional_concepts/storage.adoc @@ -342,9 +342,9 @@ ifdef::openshift-dedicated[] ==== * PVs are provisioned with either EBS volumes (AWS) or GCP storage (GCP), depending on where the cluster is provisioned. * Only RWO access mode is applicable, since EBS volumes and GCE Persistent Disks cannot be mounted to multiple nodes. - * *EmptyDir* has the same lifecycle as the pod: - ** *EmptyDir* volumes survive container crashes/restarts. - ** *EmptyDir* volumes are deleted when the pod is deleted. + * *emptyDir* has the same lifecycle as the pod: + ** *emptyDir* volumes survive container crashes/restarts. + ** *emptyDir* volumes are deleted when the pod is deleted. ==== endif::[] @@ -355,12 +355,12 @@ ifdef::openshift-online[] * Only RWO access access mode is applicable, since EBS volumes and GCE Persistent Disks cannot be mounted to to multiple nodes. * Docker volumes are disabled. ** VOLUME directive without a mapped external volume fails to be instantiated. - * *EmptyDir* is restricted to 512 Mi per project (group) per node. + * *emptyDir* is restricted to 512 Mi per project (group) per node. ** If there is a single pod for a project on a particular node, then the pod can consume up to 512 Mi of *emptyDir* storage. ** If there are multiple pods for a project on a particular node, then those pods will share the 512 Mi of *emptyDir* storage. - * *EmptyDir* has the same lifecycle as the pod: - ** *EmptyDir* volumes survive container crashes/restarts. - ** *EmptyDir* volumes are deleted when the pod is deleted. + * *emptyDir* has the same lifecycle as the pod: + ** *emptyDir* volumes survive container crashes/restarts. + ** *emptyDir* volumes are deleted when the pod is deleted. ==== endif::[] diff --git a/creating_images/guidelines.adoc b/creating_images/guidelines.adoc index 44af7ae5f7a1..5e760a433f14 100644 --- a/creating_images/guidelines.adoc +++ b/creating_images/guidelines.adoc @@ -243,7 +243,7 @@ ifdef::openshift-online[] Docker images cannot be built using the `VOLUME` directive in the `DOCKERFILE`. Images using a read/write file system need to use persistent volumes or -`emptydir` volumes instead of local storage. Instead of specifying a volume in +`emptyDir` volumes instead of local storage. Instead of specifying a volume in the Dockerfile, specify a directory for local storage and mount either a persistent volume or `emptyDir` volume to that directory when deploying the pod. 
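For illustration only (this example is not part of the patch), a pod definition along these lines mounts an `emptyDir` volume at the directory the image writes to; the pod name, image, and mount path are placeholders:

----
apiVersion: v1
kind: Pod
metadata:
  name: app-pod                      # placeholder name
spec:
  containers:
  - name: app
    image: example/app:latest        # placeholder image
    volumeMounts:
    - name: scratch
      mountPath: /var/lib/app-data   # directory the image writes to
  volumes:
  - name: scratch
    emptyDir: {}                     # temporary storage tied to the pod lifecycle
----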
endif::[] diff --git a/dev_guide/application_lifecycle/new_app.adoc b/dev_guide/application_lifecycle/new_app.adoc index d769b30ba9ba..b54c41465bcc 100644 --- a/dev_guide/application_lifecycle/new_app.adoc +++ b/dev_guide/application_lifecycle/new_app.adoc @@ -319,7 +319,7 @@ as input to `new-app`, then an image stream is created for that image as well. a|`DeploymentConfig` a|A `DeploymentConfig` is created either to deploy the output of a build, or a -specified image. The `new-app` command creates xref:../volumes.adoc#dev-guide-volumes[*EmptyDir* +specified image. The `new-app` command creates xref:../volumes.adoc#dev-guide-volumes[*emptyDir* volumes] for all Docker volumes that are specified in containers included in the resulting `DeploymentConfig`. diff --git a/dev_guide/volumes.adoc b/dev_guide/volumes.adoc index d3fbafcb3831..65b260086d57 100644 --- a/dev_guide/volumes.adoc +++ b/dev_guide/volumes.adoc @@ -23,14 +23,14 @@ are present, to repair them when possible, {product-title} invokes the `fsck` utility prior to the `mount` utility. This occurs when either adding a volume or updating an existing volume. -The simplest volume type is `EmptyDir`, which is a temporary directory on a +The simplest volume type is `emptyDir`, which is a temporary directory on a single machine. Administrators may also allow you to request a xref:persistent_volumes.adoc#dev-guide-persistent-volumes[persistent volume] that is automatically attached to your pods. [NOTE] ==== -`EmptyDir` volume storage may be restricted by a quota based on the pod's +`emptyDir` volume storage may be restricted by a quota based on the pod's FSGroup, if the FSGroup parameter is enabled by your cluster administrator. ==== diff --git a/getting_started/devpreview_faq.adoc b/getting_started/devpreview_faq.adoc index 3a5a6a5f3bc8..cea583ab9fac 100644 --- a/getting_started/devpreview_faq.adoc +++ b/getting_started/devpreview_faq.adoc @@ -57,7 +57,7 @@ Yes, but with a few caveats. For https://docs.docker.com/engine/security/security/[security reasons], no images that run processes as root are allowed. Additionally, any Dockerfile `VOLUME` instruction must be mounted with either a persistent volume claim (PVC) or an -EmptyDir at this time. See xref:devpreview-current-usage-considerations[more +emptyDir at this time. See xref:devpreview-current-usage-considerations[more considerations]. CAN I RUN PRODUCTION SERVICES ON THE DEVELOPER PREVIEW?:: @@ -135,7 +135,7 @@ pods and containers. secrets (though some amount of these secrets will be needed by the system's build and deployer service accounts). * Any Dockerfile `VOLUME` instruction must be mounted with either a persistent -volume claim (PVC) or an EmptyDir at this time. +volume claim (PVC) or an emptyDir at this time. * The project associated with a user can allocate up to two PVCs of up to 1 GiB each. * No images that run as *root* are allowed. * Only the Source-to-Image (S2I) build strategy is allowed for any build diff --git a/install_config/cluster_metrics.adoc b/install_config/cluster_metrics.adoc index 41b663a7ca48..50433e497e97 100644 --- a/install_config/cluster_metrics.adoc +++ b/install_config/cluster_metrics.adoc @@ -19,7 +19,7 @@ exposes metrics that can be collected and stored in back-ends by link:https://github.com/GoogleCloudPlatform/heapster[Heapster]. As an {product-title} administrator, you can view a cluster's metrics from all -containers and components in one user interface. These metrics are also +containers and components in one user interface. 
These metrics are also
used by xref:../dev_guide/pod_autoscaling.adoc#dev-guide-pod-autoscaling[horizontal pod
autoscalers] in order to determine when and how to scale.
@@ -27,13 +27,13 @@ This topic describes using
link:https://github.com/hawkular/hawkular-metrics[Hawkular Metrics] as a metrics
engine which stores the data persistently in a
link:http://cassandra.apache.org/[Cassandra] database. When this is configured,
-CPU and memory-based metrics are viewable from the {product-title} web console
+CPU, memory and network-based metrics are viewable from the {product-title} web console
and are available for use by
xref:../dev_guide/pod_autoscaling.adoc#dev-guide-pod-autoscaling[horizontal
pod autoscalers].
Heapster retrieves a list of all nodes from the master server, then contacts
each node individually through the `/stats` endpoint. From there, Heapster
-scrapes the metrics for CPU and memory usage, then exports them into Hawkular
+scrapes the metrics for CPU, memory and network usage, then exports them into Hawkular
Metrics.
Browsing individual pods in the web console displays separate sparkline charts
@@ -59,135 +59,184 @@ ifdef::openshift-origin[]
====
If your {product-title} installation was originally performed on a version
previous to v1.0.8, even if it has since been updated to a newer version, follow
-the instructions for node certificates outlined in Updating
-Master and Node Certificates. If the node certificate does not contain the IP
+the instructions for node certificates outlined in
+xref:../install_config/upgrading/manual_upgrades.adoc#manual-updating-master-and-node-certificates[Updating
+Master and Node Certificates]. If the node certificate does not contain the IP
address of the node, then Heapster will fail to retrieve any metrics.
====
endif::[]
-The components for cluster metrics must be deployed to the *openshift-infra*
-project. This allows xref:../dev_guide/pod_autoscaling.adoc#dev-guide-pod-autoscaling[horizontal pod
-autoscalers] to discover the Heapster service and use it to retrieve metrics
-that can be used for autoscaling.
+An Ansible playbook is available to deploy and upgrade cluster metrics. You
+should familiarize yourself with the
+xref:../install_config/install/advanced_install.adoc#install-config-install-advanced-install[Advanced Installation] section. This provides information for preparing to use Ansible
+and includes information about configuration. Parameters are added to the
+Ansible inventory file to configure various areas of cluster metrics.
-All of the following commands in this topic must be executed under the
-*openshift-infra* project. To switch to the *openshift-infra* project:
+The following sections describe the various areas and the parameters that can be added to
+the Ansible inventory file in order to modify the defaults:
-----
-$ oc project openshift-infra
-----
+- xref:../install_config/cluster_metrics.adoc#metrics-namespace[Metrics Project]
+- xref:../install_config/cluster_metrics.adoc#metrics-data-storage[Metrics Data Storage]
-To enable cluster metrics, you must next configure the following:
+[[metrics-namespace]]
+== Metrics Project
-- xref:../install_config/cluster_metrics.adoc#metrics-service-accounts[Service Accounts]
-- xref:../install_config/cluster_metrics.adoc#metrics-data-storage[Metrics Data Storage]
-- xref:../install_config/cluster_metrics.adoc#metrics-deployer[Metrics Deployer]
+The components for cluster metrics must be deployed to the *openshift-infra*
+project in order for autoscaling to work.
+xref:../dev_guide/pod_autoscaling.adoc#dev-guide-pod-autoscaling[Horizontal pod +autoscalers] specifically use this project to discover the Heapster service and +use it to retrieve metrics. The metrics project can be changed by adding +`openshift_metrics_project` to the inventory file. -[[metrics-service-accounts]] -== Service Accounts +[[metrics-data-storage]] +== Metrics Data Storage -You must configure xref:../admin_guide/service_accounts.adoc#admin-guide-service-accounts[service accounts] -for: +You can store the metrics data to either +xref:../architecture/additional_concepts/storage.adoc#architecture-additional-concepts-storage[persistent storage] or to +a temporary xref:../dev_guide/volumes.adoc#dev-guide-volumes[pod volume]. -* xref:../install_config/cluster_metrics.adoc#metrics-deployer-service-account[Metrics Deployer] -* xref:../install_config/cluster_metrics.adoc#heapster-service-account[Heapster] +[[metrics-persistent-storage]] +=== Persistent Storage -[[metrics-deployer-service-account]] -=== Metrics Deployer Service Account +Running {product-title} cluster metrics with persistent storage means that your +metrics will be stored to a +xref:../architecture/additional_concepts/storage.adoc#persistent-volumes[persistent +volume] and be able to survive a pod being restarted or recreated. This is ideal +if you require your metrics data to be guarded from data loss. For production +environments it is highly recommended to configure persistent storage for your +metrics pods. -The xref:metrics-deployer[Metrics Deployer] will be discussed in a later step, -but you must first set up a service account for it: +The size requirement of the Cassandra storage is dependent on the number of +pods. It is the administrator's responsibility to ensure that the size +requirements are sufficient for their setup and to monitor usage to ensure that +the disk does not become full. The size of the persisted volume claim is +specified with the `openshift_metrics_cassandra_pvc_size` +xref:../install_config/cluster_metrics.adoc#metrics-ansible-variables[ansible +variable] which is set to 10 GB by default. -. Create a *metrics-deployer* service account: -+ ----- -$ oc create -f - </byo/openshift-cluster/openshift-metrics.yml \ + -e openshift_metrics_install_metrics=True \ + -e openshift_metrics_hawkular_hostname=hawkular-metrics.example.com \ + -e openshift_metrics_cassandra_storage_type=pv ---- ==== @@ -474,9 +557,9 @@ The following command sets the Hawkular Metrics route to use *hawkular-metrics.example.com* and deploy without persistent storage. ---- -$ oc new-app -f metrics-deployer.yaml \ - -p HAWKULAR_METRICS_HOSTNAME=hawkular-metrics.example.com \ - -p USE_PERSISTENT_STORAGE=false +$ ansible-playbook /byo/openshift-cluster/openshift-metrics.yml \ + -e openshift_metrics_install_metrics=True \ + -e openshift_metrics_hawkular_hostname=hawkular-metrics.example.com ---- ==== @@ -486,88 +569,36 @@ Because this is being deployed without persistent storage, metric data loss can occur. ==== +[[metrics-diagnostics]] +=== Metrics Diagnostics -[[metrics-reencrypting-route]] -== Using a Re-encrypting Route - -[NOTE] -==== -The following section is not required if the *hawkular-metrics.pem* secret was -specified as a -xref:../install_config/cluster_metrics.adoc#metrics-deployer-using-secrets[deployer -secret]. -==== - -By default, the Hawkular Metrics server uses an internally signed certificate, -which is not trusted by browsers or other external services. 
To provide your own -trusted certificate to be used for external access, use a route with -xref:../architecture/core_concepts/routes.adoc#secured-routes[re-encryption -termination]. - -Creating this new route requires deleting the default route that just passes -through to an internally signed certificate: +The are some diagnostics for metrics to assist in evaluating the state of the +metrics stack. To execute diagnostics for metrics: -. First, delete the default route that uses the self-signed certificates: -+ ---- -$ oc delete route hawkular-metrics +$ oadm diagnostics MetricsApiProxy ---- -. Create a new route with -xref:../architecture/core_concepts/routes.adoc#secured-routes[re-encryption -termination] -+ -==== ----- -$ oc create route reencrypt hawkular-metrics-reencrypt \ - --hostname hawkular-metrics.example.com \ <1> - --key /path/to/key \ <2> - --cert /path/to/cert \ <2> - --ca-cert /path/to/ca.crt \ <2> -ifdef::openshift-enterprise[] - --service hawkular-metrics --port 8444 \ -endif::[] -ifdef::openshift-origin[] - --service hawkular-metrics --port 8443 \ -endif::[] - --dest-ca-cert /path/to/internal-ca.crt <3> ----- -<1> The value specified in the `*HAWKULAR_METRICS_HOSTNAME*` template parameter. -<2> These need to define the custom certificate you want to provide. -<3> This needs to correspond to the CA used to sign the internal Hawkular Metrics certificate. -==== -+ -The CA used to sign the internal Hawkular Metrics certificate can be found from -the *hawkular-metrics-certificate* secret: -+ ----- -$ base64 -d <<< \ - `oc get -o yaml secrets hawkular-metrics-certificate \ - | grep -i hawkular-metrics-ca.certificate | awk '{print $2}'` \ - > /path/to/internal-ca.crt ----- - - -[[configuring-openshift-metrics]] -== Configuring OpenShift +[[install-setting-the-metrics-public-url]] +== Setting the Metrics Public URL The {product-title} web console uses the data coming from the Hawkular Metrics service to display its graphs. The URL for accessing the Hawkular Metrics -service must be configured via the `*metricsPublicURL*` option in the +service must be configured with the `metricsPublicURL` option in the xref:../install_config/master_node_configuration.adoc#master-configuration-files[master configuration file] (*_/etc/origin/master/master-config.yaml_*). This URL -corresponds to the route created with the `*HAWKULAR_METRICS_HOSTNAME*` template -parameter during the +corresponds to the route created with the `openshift_metrics_hawkular_hostname` +inventory variable used during the xref:../install_config/cluster_metrics.adoc#deploying-the-metrics-components[deployment] of the metrics components. [NOTE] ==== -You must be able to resolve the `*HAWKULAR_METRICS_HOSTNAME*` from the browser +You must be able to resolve the `openshift_metrics_hawkular_hostname` from the browser accessing the console. ==== -For example, if your `*HAWKULAR_METRICS_HOSTNAME*` corresponds to +For example, if your `openshift_metrics_hawkular_hostname` corresponds to `hawkular-metrics.example.com`, then you must make the following change in the *_master-config.yaml_* file: @@ -591,55 +622,75 @@ displayed on the pod overview pages. If you are using self-signed certificates, remember that the Hawkular Metrics service is hosted under a different host name and uses different certificates than the console. You may need to explicitly open a browser tab to the value -specified in `*metricsPublicURL*` and accept that certificate. +specified in `metricsPublicURL` and accept that certificate. 
To avoid this issue, use certificates which are configured to be acceptable by your browser. ==== -ifdef::openshift-origin[] +[[cluster-metrics-accessing-hawkular-metrics-directly]] == Accessing Hawkular Metrics Directly -To access and manage metrics more directly, use the Hawkular Metrics API. +To access and manage metrics more directly, use the +link:https://github.com/openshift/origin-metrics/blob/master/docs/hawkular_metrics.adoc#accessing-metrics-using-hawkular-metrics[Hawkular +Metrics API]. + +[NOTE] +==== +When accessing Hawkular Metrics from the API, you will only be able to perform +reads. Writing metrics has been disabled by default. If you want for individual +users to also be able to write metrics, you must set the +`openshift_metrics_hawkular_user_write_access` +xref:../install_config/cluster_metrics.adoc#metrics-ansible-variables[variable] +to *true*. + +However, it is recommended to use the default configuration and only have +metrics enter the system via Heapster. If write access is enabled, any user +will be able to write metrics to the system, which can affect performance and +cause Cassandra disk usage to unpredictably increase. +==== -The link:http://www.hawkular.org/docs/rest/rest-metrics.html[Hawkular Metrics -documentation] covers how to use the API, but there are a few differences when -dealing with the version of Hawkular Metrics configured for use on -{product-title}: +The link:http://www.hawkular.org/docs/rest/rest-metrics.html[Hawkular Metrics documentation] +covers how to use the API, but there are a few differences when dealing with the +version of Hawkular Metrics configured for use on {product-title}: -=== OpenShift Projects & Hawkular Tenants +[[cluster-metrics-openshift-projects-and-hawkular-tenants]] +=== {product-title} Projects and Hawkular Tenants -Hawkular Metrics is a multi-tenanted application. The way its been configured is -that a project in {product-title} corresponds to a tenant in Hawkular Metrics. +Hawkular Metrics is a multi-tenanted application. It is configured so that a +project in {product-title} corresponds to a tenant in Hawkular Metrics. -As such, when accessing metrics for a project named `MyProject` you will need to -set the -link:http://www.hawkular.org/docs/rest/rest-metrics.html#_tenant_header[Hawkular-tenant] -header to `MyProject` +As such, when accessing metrics for a project named *MyProject* you must set the +link:http://www.hawkular.org/docs/rest/rest-metrics.html#_tenant_header[*Hawkular-Tenant*] +header to *MyProject*. -There is also a special tenant named `_system` which contains system level -metrics. This will require either a `cluster-reader` or `cluster-admin` level +There is also a special tenant named *_system* which contains system level +metrics. This requires either a *cluster-reader* or *cluster-admin* level privileges to access. +[[cluster-metrics-authorization]] === Authorization The Hawkular Metrics service will authenticate the user against {product-title} to determine if the user has access to the project it is trying to access. -When accessing the Hawkular Metrics API, you will need to pass a bearer token in -the `Authorization` header. +Hawkular Metrics accepts a bearer token from the client and verifies that token +with the {product-title} server using a *SubjectAccessReview*. If the user has +proper read privileges for the project, they are allowed to read the metrics +for that project. For the *_system* tenant, the user requesting to read from +this tenant must have *cluster-reader* permission. 
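As an illustrative sketch (not part of this patch), a read-only request could combine a bearer token with the *Hawkular-Tenant* header described above, reusing the `hawkular-metrics.example.com` route and *MyProject* project from the earlier examples; check the linked Hawkular Metrics documentation for the exact endpoints:

----
$ curl -k \
    -H "Authorization: Bearer $(oc whoami -t)" \
    -H "Hawkular-Tenant: MyProject" \
    -H "Content-Type: application/json" \
    https://hawkular-metrics.example.com/hawkular/metrics/metrics
----

The `-k` flag is only needed while the route still serves the internally signed certificate.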
-For more information how how to access the Hawkular Metrics in {product-title}, -please see the -link:https://github.com/openshift/origin-metrics/blob/master/docs/hawkular_metrics.html[Origin -Metrics documentation] +When accessing the Hawkular Metrics API, you must pass a bearer token in the +*Authorization* header. +ifdef::openshift-origin[] +[[cluster-metrics-accessing-heapster-directly]] == Accessing Heapster Directly Heapster has been configured to be only accessible via the API proxy. Accessing it will required either a cluster-reader or cluster-admin privileges. -For example, to access the Heapster `validate` page, you would need to access it +For example, to access the Heapster *validate* page, you need to access it using something similar to: ---- @@ -651,88 +702,54 @@ For more information about Heapster and how to access its APIs, please refer the link:https://github.com/kubernetes/heapster/[Heapster] project. endif::[] -[[cluster-metrics-scaling-openshift-metrics-pods]] -== Scaling {product-title} Metrics Pods +[[metrics-scaling-metrics-pods]] +== Scaling {product-title} Cluster Metrics Pods -One set of metrics pods (Cassandra/Hawkular/Heapster) is able to monitor at -least 10,000 pods. +Information about scaling cluster metrics capabilities is available in the +xref:../scaling_performance/scaling_cluster_metrics.adoc#cluster-metrics-scaling-openshift-metrics-pods[Scaling and +Performance Guide]. -[CAUTION] -==== -Pay attention to system load on nodes where {product-title} metrics pods run. -Use that information to determine if it is necessary to scale out a number of -{product-title} metrics pods and spread the load across multiple {product-title} -nodes. Scaling {product-title} metrics heapster pods is not recommended. -==== +[[metrics-logging]] +== Integration with Aggregated Logging -[[cluster-metrics-scaling-pods-prereqs]] -=== Prerequisites +Hawkular Alerts must be connected to the Aggregated Logging's Elasticsearch to +react on log events. By default, Hawkular will try to find Elasticsearch on its +default place (namespace `logging`, pod `logging-es`) at every boot. If the +Aggregated Logging is installed after Hawkular, the Hawkular Metrics pod might +need to be restarted in order to recognize the new Elasticsearch server. The +Hawkular boot log provides a clear indication if the integration could not be +properly configured, with messages like: -If persistent storage was used to deploy {product-title} metrics, then you must -xref:../dev_guide/persistent_volumes.adoc#dev-guide-persistent-volumes[create a persistent volume (PV)] -for the new Cassandra pod to use before you can scale out the number of -{product-title} metrics Cassandra pods. However, if Cassandra was deployed with -dynamically provisioned PVs, then this step is not necessary. +--- +Failed to import the logging certificate into the store. Continuing, but the logging integration might fail. +--- -[[cluster-metrics-scaling-pods-cassandra]] -=== Scaling the Cassandra Components +or -The Cassandra nodes use persistent storage, therefore scaling up or down is not possible with replication controllers. +--- +Could not get the logging secret! Status code: 000. The Hawkular Alerts integration with Logging might not work properly. +--- -Scaling a Cassandra cluster requires you to use the `hawkular-cassandra-node` template. By default, the Cassandra cluster is a single-node cluster. 
+This feature is available from ifdef::openshift-origin[] -To add a second node with 10Gi of storage: - ----- -# oc process hawkular-cassandra-node-pv \ - -v IMAGE_PREFIX=openshift/origin- \ - -v IMAGE_VERSION=devel \ - -v PV_SIZE=10Gi \ - -v NODE=2 ----- - -To deploy more nodes, simply increase the `NODE` value. -endif::openshift-origin[] - -ifdef::openshift-enterprise[] -To scale out the number of {product-title} metrics hawkular pods to two -replicas, run: - ----- -# oc scale -n openshift-infra --replicas=2 rc hawkular-metrics ----- -endif::openshift-enterprise[] - -[NOTE] -==== -If you add a new node to a Cassandra cluster, the data stored in the cluster -rebalances across the cluster. The same thing happens If you remove a node from -the Cluster. -==== - + version v1.7. +endif::[] ifdef::openshift-enterprise[] -[[cluster-metrics-horizontal-pod-autoscaling]] -== Horizontal Pod Autoscaling - + version 3.7.0. endif::[] +You can confirm if logging is available by checking the log for an entry like: + +--- +Retrieving the Logging's CA and adding to the trust store, if Logging is available. +--- [[metrics-cleanup]] == Cleanup -You can remove everything deloyed by the metrics deployer by performing the -following steps: - ----- -$ oc delete all --selector="metrics-infra" -$ oc delete sa --selector="metrics-infra" -$ oc delete templates --selector="metrics-infra" -$ oc delete secrets --selector="metrics-infra" -$ oc delete pvc --selector="metrics-infra" ----- - -To remove the deployer components, perform the following steps: +You can remove everything deployed by the OpenShift Ansible `openshift_metrics` role +by performing the following steps: ---- -$ oc delete sa metrics-deployer -$ oc delete secret metrics-deployer +$ ansible-playbook /byo/openshift-cluster/openshift-metrics.yml \ + -e openshift_metrics_install_metrics=False ---- diff --git a/install_config/install/advanced_install.adoc b/install_config/install/advanced_install.adoc index 01bd9ae150f3..4eca18a6b658 100644 --- a/install_config/install/advanced_install.adoc +++ b/install_config/install/advanced_install.adoc @@ -13,12 +13,14 @@ toc::[] == Overview A reference configuration implemented using -http://www.ansible.com[Ansible] playbooks is available as the _advanced +link:http://docs.ansible.com/ansible/[Ansible] playbooks is available as the _advanced installation_ method for installing a {product-title} cluster. Familiarity with Ansible is assumed, however you can use this configuration as a reference to create your own implementation using the configuration management tool of your choosing. -While RHEL Atomic Host is supported for running containerized OpenShift +[IMPORTANT] +==== +While RHEL Atomic Host is supported for running containerized {product-title} services, the advanced installation method utilizes Ansible, which is not available in RHEL Atomic Host, and must therefore be run from ifdef::openshift-enterprise[] @@ -28,10 +30,15 @@ ifdef::openshift-origin[] a supported version of Fedora, CentOS, or RHEL. endif::[] The host initiating the installation does not need to be intended for inclusion -in the OpenShift cluster, but it can be. +in the {product-title} cluster, but it can be. + +Alternatively, a +xref:running-the-advanced-installation-system-container[containerized version of the installer] is available as a system container, which is currently a +Technology Preview feature. 
+==== ifdef::openshift-enterprise[] -Alternatively, you can use the xref:quick_install.adoc#install-config-install-quick-install[quick installation] +Alternatively, you can use the xref:../../install_config/install/quick_install.adoc#install-config-install-quick-install[quick installation] method if you prefer an interactive installation experience. endif::[] @@ -44,128 +51,53 @@ xref:../../install_config/install/stand_alone_registry.adoc#install-config-insta [[advanced-before-you-begin]] == Before You Begin -Before installing OpenShift, you must first see the xref:../../install_config/install/prerequisites.adoc#install-config-install-prerequisites[Prerequisites] topic to -prepare your hosts, which includes verifying system and environment requirements -per component type and properly installing and configuring Docker. It also -includes installing Ansible version 1.8.4 or later, as the advanced installation -method is based on Ansible playbooks and as such requires directly invoking -Ansible. - -If you are interested in installing OpenShift using the containerized method +Before installing {product-title}, you must first see the +xref:../../install_config/install/prerequisites.adoc#install-config-install-prerequisites[Prerequisites] +and +xref:../../install_config/install/host_preparation.adoc#install-config-install-host-preparation[Host +Preparation] topics to prepare your hosts. This includes verifying system and +environment requirements per component type and properly installing and +configuring Docker. It also includes installing Ansible version 2.2.0 or later, +as the advanced installation method is based on Ansible playbooks and as such +requires directly invoking Ansible. + +If you are interested in installing {product-title} using the containerized method (optional for RHEL but required for RHEL Atomic Host), see -xref:../../install_config/install/rpm_vs_containerized.adoc#install-config-install-rpm-vs-containerized[RPM vs -Containerized] to ensure that you understand the differences between these +xref:../../install_config/install/rpm_vs_containerized.adoc#install-config-install-rpm-vs-containerized[Installing on Containerized Hosts] to ensure that you understand the differences between these methods, then return to this topic to continue. +For large-scale installs, including suggestions for optimizing install time, +see the +xref:../../scaling_performance/install_practices.adoc#scaling-performance-install-best-practices[Scaling and Performance Guide]. + After following the instructions in the xref:../../install_config/install/prerequisites.adoc#install-config-install-prerequisites[Prerequisites] topic and deciding between the RPM and containerized methods, you can continue in this -topic to xref:configuring-ansible[Configuring Ansible]. +topic to xref:configuring-ansible[Configuring Ansible Inventory Files]. [[configuring-ansible]] +== Configuring Ansible Inventory Files -== Configuring Ansible - -The *_/etc/ansible/hosts_* file is Ansible's inventory file for the playbook to -use during the installation. The inventory file describes the configuration for -your OpenShift cluster. You must replace the default contents of the file with -your desired configuration. +The *_/etc/ansible/hosts_* file is Ansible's inventory file for the playbook +used to install {product-title}. The inventory file describes the configuration +for your {product-title} cluster. You must replace the default contents of the +file with your desired configuration. 
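For orientation only, a stripped-down inventory might look like the following sketch; the host names and variable values are placeholders, and the sections and example inventories that follow cover the full set of options:

----
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
# See Configuring Deployment Type below; for example:
openshift_deployment_type=origin
openshift_master_default_subdomain=apps.test.example.com

[masters]
master.example.com

[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
----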
The following sections describe commonly-used variables to set in your inventory -file during an advanced installation, followed by example inventory files you -can use as a starting point for your installation. The examples describe various -environment topographies, including xref:multiple-masters[using multiple -masters for high availability]. You can choose an example that matches your -requirements, modify it to match your own environment, and use it as your -inventory file when xref:running-the-advanced-installation[running the advanced -installation]. - -[[configuring-host-variables]] -=== Configuring Host Variables - -To assign environment variables to hosts during the Ansible installation, indicate -the desired variables in the *_/etc/ansible/hosts_* file after the host entry in -the *[masters]* or *[nodes]* sections. For example: - -==== ----- -[masters] -ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com ----- -==== - -The following table describes variables for use with the Ansible installer that -can be assigned to individual host entries: - -[[advanced-host-variables]] -.Host Variables -[options="header"] -|=== - -|Variable |Purpose - -|`*openshift_hostname*` -|This variable overrides the internal cluster host name for the system. Use this -when the system's default IP address does not resolve to the system host name. - -|`*openshift_public_hostname*` -|This variable overrides the system's public host name. Use this for cloud -installations, or for hosts on networks using a network address translation -(NAT). - -|`*openshift_ip*` -|This variable overrides the cluster internal IP address for the system. Use -this when using an interface that is not configured with the default route. - -|`*openshift_public_ip*` -|This variable overrides the system's public IP address. Use this for cloud -installations, or for hosts on networks using a network address translation -(NAT). - -|`*containerized*` -|If set to *true*, containerized OpenShift services are run on target master and -node hosts instead of installed using RPM packages. If set to *false* or unset, -the default RPM method is used. RHEL Atomic Host requires the containerized -method, and is automatically selected for you based on the detection of the -*_/run/ostree-booted_* file. See -xref:../../install_config/install/rpm_vs_containerized.adoc#install-config-install-rpm-vs-containerized[RPM vs -Containerized] for more details. -ifdef::openshift-enterprise[] -Containerized installations are supported starting in OSE 3.1.1. -endif::[] - -|`*openshift_node_labels*` -|This variable adds labels to nodes during installation. See -xref:configuring-node-host-labels[Configuring Node Host Labels] for more -details. - -|`*openshift_node_kubelet_args*` -|This variable is used to configure `kubeletArguments` on nodes, such as -arguments used in xref:../../admin_guide/garbage_collection.adoc#admin-guide-garbage-collection[container and -image garbage collection], and to -xref:../../admin_guide/manage_nodes.adoc#configuring-node-resources[specify -resources per node]. `kubeletArguments` are key value pairs that are passed -directly to the Kubelet that match the -http://kubernetes.io/v1.1/docs/admin/kubelet.html[Kubelet's command line -arguments]. `kubeletArguments` are not migrated or validated and may become -invalid if used. These values override other settings in node configuration -which may cause invalid configurations. 
Example usage: -*{'image-gc-high-threshold': ['90'],'image-gc-low-threshold': ['80']}*. - -|`*openshift_hosted_router_selector*` -|Default node selector for automatically deploying router pods. See -xref:configuring-node-host-labels[Configuring Node Host Labels] for details. - -|`*openshift_registry_selector*` -|Default node selector for automatically deploying registry pods. See -xref:configuring-node-host-labels[Configuring Node Host Labels] for details. +file during an advanced installation, followed by +xref:adv-install-example-inventory-files[example inventory files] you can use as +a starting point for your installation. -|`*openshift_docker_options*` -|This variable configures additional Docker options within *_/etc/sysconfig/docker_*, such as -options used in xref:../../install_config/install/host_preparation.adoc#managing-docker-container-logs[Managing Container Logs]. -Example usage: *"--log-driver json-file --log-opt max-size=1M --log-opt max-file=3"*. +Many of the Ansible variables described are optional. Accepting the default +values should suffice for development environments, but for production +environments it is recommended you read through and become familiar with the +various options available. -|=== +The example inventories describe various environment topographies, including +xref:multiple-masters[using multiple masters for high availability]. You can +choose an example that matches your requirements, modify it to match your own +environment, and use it as your inventory file when +xref:running-the-advanced-installation[running the advanced installation]. [discrete] [[advanced-install-image-version-policy]] @@ -173,112 +105,155 @@ Example usage: *"--log-driver json-file --log-opt max-size=1M --log-opt max-file Images require a version number policy in order to maintain updates. See the -xref:../../architecture/core_concepts/containers_and_images.adoc#architecture-images-tag-policy][Image +xref:../../architecture/core_concepts/containers_and_images.adoc#architecture-images-tag-policy[Image Version Tag Policy] section in the Architecture Guide for more information. [[configuring-cluster-variables]] === Configuring Cluster Variables To assign environment variables during the Ansible install that apply more -globally to your OpenShift cluster overall, indicate the desired variables in +globally to your {product-title} cluster overall, indicate the desired variables in the *_/etc/ansible/hosts_* file on separate, single lines within the *[OSEv3:vars]* section. For example: -==== ---- [OSEv3:vars] -openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}] +openshift_master_identity_providers=[{'name': 'htpasswd_auth', +'login': 'true', 'challenge': 'true', +'kind': 'HTPasswdPasswordIdentityProvider', +'filename': '/etc/origin/master/htpasswd'}] openshift_master_default_subdomain=apps.test.example.com ---- -==== The following table describes variables for use with the Ansible installer that can be assigned cluster-wide: [[cluster-variables-table]] .Cluster Variables -[options="header", cols="1,2"] +[options="header"] |=== |Variable |Purpose -|`*ansible_ssh_user*` +|`ansible_ssh_user` |This variable sets the SSH user for the installer to use and defaults to -*root*. This user should allow SSH-based authentication -xref:host_preparation.adoc#ensuring-host-access[without requiring a password]. If +`root`. 
This user should allow SSH-based authentication +xref:../../install_config/install/host_preparation.adoc#ensuring-host-access[without requiring a password]. If using SSH key-based authentication, then the key should be managed by an SSH agent. -|`*ansible_sudo*` -|If `*ansible_ssh_user*` is not *root*, this variable must be set to *true* and -the user must be configured for passwordless *sudo*. +|`ansible_become` +|If `ansible_ssh_user` is not `root`, this variable must be set to `true` and +the user must be configured for passwordless `sudo`. + +|`debug_level` +a|This variable sets which INFO messages are logged to the `systemd-journald.service`. Set one of the following: + +* `0` to log errors and warnings only +* `2` to log normal information (This is the default level.) +* `4` to log debugging-level information +* `6` to log API-level debugging information (request / response) +* `8` to log body-level API debugging information + +For more information on debug log levels, see xref:../../install_config/master_node_configuration.adoc#master-node-config-logging-levels[Configuring Logging Levels]. -|`*containerized*` -|If set to *true*, containerized OpenShift services are run on all target master +|`containerized` +|If set to `true`, containerized {product-title} services are run on all target master and node hosts in the cluster instead of installed using RPM packages. If set to -*false* or unset, the default RPM method is used. RHEL Atomic Host requires the +`false` or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the *_/run/ostree-booted_* file. See -xref:../../install_config/install/rpm_vs_containerized.adoc#install-config-install-rpm-vs-containerized[RPM vs -Containerized] for more details. +xref:../../install_config/install/rpm_vs_containerized.adoc#install-config-install-rpm-vs-containerized[Installing on +Containerized Hosts] for more details. ifdef::openshift-enterprise[] -Containerized installations are supported starting in OSE 3.1.1. +Containerized installations are supported starting in {product-title} 3.1.1. endif::[] -|`*openshift_master_cluster_hostname*` +|`openshift_master_admission_plugin_config` +a|This variable sets the parameter and arbitrary JSON values as per the requirement in your inventory hosts file. For example: + +---- +openshift_master_admission_plugin_config={"ClusterResourceOverride":{"configuration":{"apiVersion":"v1","kind":"ClusterResourceOverrideConfig","memoryRequestToLimitPercent":"25","cpuRequestToLimitPercent":"25","limitCPUToMemoryPercent":"200"}}} +---- + +|`openshift_master_audit_config` +|This variable enables API service auditing. See +xref:../../install_config/master_node_configuration.adoc#master-node-config-audit-config[Audit +Configuration] for more information. + +|`openshift_master_cluster_hostname` |This variable overrides the host name for the cluster, which defaults to the host name of the master. -|`*openshift_master_cluster_public_hostname*` +|`openshift_master_cluster_public_hostname` |This variable overrides the public host name for the cluster, which defaults to the host name of the master. -|`*openshift_master_cluster_method*` +|`openshift_master_cluster_method` |Optional. This variable defines the HA method when deploying multiple masters. Supports the `native` method. See xref:multiple-masters[Multiple Masters] for more information. 
-|`*openshift_rolling_restart_mode*` +|`openshift_rolling_restart_mode` |This variable enables rolling restarts of HA masters (i.e., masters are taken down one at a time) when -xref:../upgrading/automated_upgrades.adoc#running-the-upgrade-playbook-directly[running +xref:../../install_config/upgrading/automated_upgrades.adoc#running-the-upgrade-playbook-directly[running the upgrade playbook directly]. It defaults to `services`, which allows rolling restarts of services on the masters. It can instead be set to `system`, which enables rolling, full system restarts and also works for single master clusters. -|`*os_sdn_network_plugin_name*` +|`os_sdn_network_plugin_name` |This variable configures which xref:../../architecture/networking/sdn.adoc#architecture-additional-concepts-sdn[OpenShift SDN plug-in] to use for the pod network, which defaults to `redhat/openshift-ovs-subnet` for the standard SDN plug-in. Set the variable to `redhat/openshift-ovs-multitenant` to use the multitenant plug-in. -|`*openshift_master_identity_providers*` +|`openshift_master_identity_providers` |This variable overrides the xref:../../install_config/configuring_authentication.adoc#install-config-configuring-authentication[identity provider], which defaults to xref:../../install_config/configuring_authentication.adoc#DenyAllPasswordIdentityProvider[Deny All]. -|`*openshift_master_named_certificates*` +|`openshift_master_named_certificates` .2+.^|These variables are used to configure xref:../../install_config/certificate_customization.adoc#install-config-certificate-customization[custom certificates] which are deployed as part of the installation. See xref:advanced-install-custom-certificates[Configuring Custom Certificates] for more information. +|`openshift_master_overwrite_named_certificates` + +|`openshift_hosted_registry_cert_expire_days` +|Validity of the auto-generated registry certificate in days. Defaults to `730` (2 years). + +|`openshift_ca_cert_expire_days` +|Validity of the auto-generated CA certificate in days. Defaults to `1825` (5 years). + +|`openshift_node_cert_expire_days` +|Validity of the auto-generated node certificate in days. Defaults to `730` (2 years). -|`*openshift_master_overwrite_named_certificates*` +|`openshift_master_cert_expire_days` +|Validity of the auto-generated master certificate in days. Defaults to `730` (2 years). -|`*openshift_master_session_name*` +|`etcd_ca_default_days` +|Validity of the auto-generated external etcd certificates in days. Controls +validity for etcd CA, peer, server and client certificates. Defaults to `1825` +(5 years). + +|`os_firewall_use_firewalld` +|Set to `true` to use firewalld instead of the default iptables. Not available on RHEL Atomic Host. See the xref:advanced-install-configuring-firewalls[Configuring the Firewall] section for more information. + +|`openshift_master_session_name` .4+.^|These variables override defaults for xref:../../install_config/configuring_authentication.adoc#session-options[session options] in the OAuth configuration. See xref:advanced-install-session-options[Configuring Session Options] for more information. 
-|`*openshift_master_session_max_seconds*` +|`openshift_master_session_max_seconds` -|`*openshift_master_session_auth_secrets*` +|`openshift_master_session_auth_secrets` -|`*openshift_master_session_encryption_secrets*` +|`openshift_master_session_encryption_secrets` -|`*openshift_master_portal_net*` +|`openshift_portal_net` |This variable configures the subnet in which xref:../../architecture/core_concepts/pods_and_services.adoc#services[services] will be created within the @@ -288,152 +263,316 @@ existing network blocks in your infrastructure to which pods, nodes, or the master may require access to, or the installation will fail. Defaults to `172.30.0.0/16`, and cannot be re-configured after deployment. If changing from the default, avoid `172.17.0.0/16`, which the *docker0* network bridge uses by default, or modify the *docker0* network. -|`*openshift_master_default_subdomain*` +|`openshift_master_default_subdomain` |This variable overrides the default subdomain to use for exposed xref:../../architecture/networking/routes.adoc#architecture-core-concepts-routes[routes]. -|`*osm_default_node_selector*` +|`openshift_master_image_policy_config` +|Sets `imagePolicyConfig` in the master configuration. See xref:../../install_config/master_node_configuration.adoc#master-config-image-config[Image Configuration] for details. + +|`openshift_node_proxy_mode` +|This variable specifies the +xref:../../architecture/core_concepts/pods_and_services.adoc#service-proxy-mode[service +proxy mode] to use: either `iptables` for the default, pure-`iptables` +implementation, or `userspace` for the user space proxy. + +|`osm_default_node_selector` |This variable overrides the node selector that projects will use by default when placing pods. -|`*osm_cluster_network_cidr*` +|`osm_cluster_network_cidr` | This variable overrides the xref:../../architecture/networking/sdn.adoc#sdn-design-on-masters[SDN cluster network] CIDR block. This is the network from which pod IPs are assigned. This network block should be a private block and must not conflict with existing network blocks in your infrastructure to which pods, nodes, or the -master may require access. Defaults to *10.128.0.0/14* and *cannot* be arbitrarily +master may require access. Defaults to `10.128.0.0/14` and cannot be arbitrarily re-configured after deployment, although certain changes to it can be made in -the xref:../../install_config/configuring_sdn.adoc#configuring-the-pod-network-on-masters[SDN master configuration]. +the xref:../../install_config/configuring_sdn.adoc#configuring-the-pod-network-on-masters[SDN +master configuration]. -|`*osm_host_subnet_length*` +|`osm_host_subnet_length` |This variable specifies the size of the per host subnet allocated for pod IPs by xref:../../architecture/networking/sdn.adoc#sdn-design-on-masters[{product-title} SDN]. Defaults to `9` which means that a subnet of size /23 is allocated to each host; for example, given the default 10.128.0.0/14 cluster network, this will -allocate 10.128.0.0/23, 10.128.2.0/23, 10.128.4.0/23, and so on. This *cannot* be +allocate 10.128.0.0/23, 10.128.2.0/23, 10.128.4.0/23, and so on. This cannot be re-configured after deployment. + +|`openshift_use_flannel` +|This variable enables *flannel* as an alternative networking layer instead of +the default SDN. If enabling *flannel*, disable the default SDN with the +`openshift_use_openshift_sdn` variable. For more information, see xref:../install_config/configuring_sdn.adoc#using-flannel[Using Flannel]. 
+ +|`openshift_docker_additional_registries` +|{product-title} adds the specified additional registry or registries to the +*docker* configuration. These are the registries to search. + +|`openshift_docker_insecure_registries` +|{product-title} adds the specified additional insecure registry or registries to +the *docker* configuration. For any of these registries, secure sockets layer +(SSL) is not verified. Also, add these registries to +`openshift_docker_additional_registries`. + +|`openshift_docker_blocked_registries` +|{product-title} adds the specified blocked registry or registries to the +*docker* configuration. Block the listed registries. Setting this to `all` +blocks everything not in the other variables. + +|`openshift_metrics_hawkular_hostname` +|This variable sets the host name for integration with the metrics console by +overriding `metricsPublicURL` in the master configuration for cluster metrics. +If you alter this variable, ensure the host name is accessible via your router. +See xref:advanced-install-cluster-metrics[Configuring Cluster Metrics] for +details. + +|`openshift_template_service_broker_namespaces` +|This variable enables the template service broker by specifying one of more +namespaces whose templates will be served by the broker. |=== -[[advanced-install-configuring-global-proxy]] -=== Configuring Global Proxy Options +[[advanced-install-deployment-types]] +=== Configuring Deployment Type -If your hosts require use of a HTTP or HTTPS proxy in order to connect to -external hosts, there are many components that must be configured to use the -proxy, including masters, Docker, and builds. Node services only connect to the -master API requiring no external access and therefore do not need to be -configured to use a proxy. +Various defaults used throughout the playbooks and roles used by the installer +are based on the deployment type configuration (usually defined in an Ansible +inventory file). -In order to simplify this configuration, the following Ansible variables can be -specified at a cluster or host level to apply these settings uniformly across -your environment. +ifdef::openshift-enterprise[] +Ensure the `deployment_type` parameter in your inventory file's `[OSEv3:vars]` +section is set to `openshift-enterprise` to install the {product-title} variant: -[NOTE] -==== -See xref:../../install_config/build_defaults_overrides.adoc#install-config-build-defaults-overrides[Configuring -Global Build Defaults and Overrides] for more information on how the proxy -environment is defined for builds. -==== +---- +[OSEv3:vars] +deployment_type=openshift-enterprise +---- +endif::[] +ifdef::openshift-origin[] +Ensure the `deployment_type` parameter in your inventory file's `[OSEv3:vars]` +section is set to `origin` to install the {product-title} variant: -.Cluster Proxy Variables +---- +[OSEv3:vars] +openshift_deployment_type=origin +---- +endif::[] + + +[[configuring-host-variables]] +=== Configuring Host Variables + +To assign environment variables to hosts during the Ansible installation, indicate +the desired variables in the *_/etc/ansible/hosts_* file after the host entry in +the *[masters]* or *[nodes]* sections. 
For example: + +---- +[masters] +ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com +---- + +The following table describes variables for use with the Ansible installer that +can be assigned to individual host entries: + +[[advanced-host-variables]] +.Host Variables [options="header"] |=== |Variable |Purpose -|`*openshift_http_proxy*` -|This variable specifies the `*HTTP_PROXY*` environment variable for masters and -the Docker daemon. +|`openshift_hostname` +|This variable overrides the internal cluster host name for the system. Use this +when the system's default IP address does not resolve to the system host name. -|`*openshift_https_proxy*` -|This variable specifices the `*HTTPS_PROXY*` environment variable for masters -and the Docker daemon. +|`openshift_public_hostname` +|This variable overrides the system's public host name. Use this for cloud +installations, or for hosts on networks using a network address translation +(NAT). -|`*openshift_no_proxy*` -|This variable is used to set the `*NO_PROXY*` environment variable for masters -and the Docker daemon. This value should be set to a comma separated list of -host names or wildcard host names that should not use the defined proxy. This -list will be augmented with the list of all defined {product-title} host names -by default. +|`openshift_ip` +|This variable overrides the cluster internal IP address for the system. Use +this when using an interface that is not configured with the default route. -|`*openshift_generate_no_proxy_hosts*` -|This boolean variable specifies whether or not the names of all defined -OpenShift hosts and `pass:[*.cluster.local]` should be automatically appended to -the `*NO_PROXY*` list. Defaults to *true*; set it to *false* to override this -option. +|`openshift_public_ip` +|This variable overrides the system's public IP address. Use this for cloud +installations, or for hosts on networks using a network address translation +(NAT). -|`*openshift_builddefaults_http_proxy*` -|This variable defines the `*HTTP_PROXY*` environment variable inserted into -builds using the `*BuildDefaults*` admission controller. If -`*openshift_http_proxy*` is set, this variable will inherit that value; you only -need to set this if you want your builds to use a different value. +|`containerized` +|If set to *true*, containerized {product-title} services are run on the target master and +node hosts instead of installed using RPM packages. If set to *false* or unset, +the default RPM method is used. RHEL Atomic Host requires the containerized +method, and is automatically selected for you based on the detection of the +*_/run/ostree-booted_* file. See +xref:../../install_config/install/rpm_vs_containerized.adoc#install-config-install-rpm-vs-containerized[Installing on Containerized Hosts] for more details. +ifdef::openshift-enterprise[] +Containerized installations are supported starting in {product-title} 3.1.1. +endif::[] -|`*openshift_builddefaults_https_proxy*` -|This variable defines the `*HTTPS_PROXY*` environment variable inserted into -builds using the `*BuildDefaults*` admission controller. If -`*openshift_https_proxy*` is set, this variable will inherit that value; you -only need to set this if you want your builds to use a different value. +|`openshift_node_labels` +|This variable adds labels to nodes during installation. See +xref:configuring-node-host-labels[Configuring Node Host Labels] for more +details. 
-|`*openshift_builddefaults_no_proxy*` -|This variable defines the `*NO_PROXY*` environment variable inserted into -builds using the `*BuildDefaults*` admission controller. If -`*openshift_no_proxy*` is set, this variable will inherit that value; you only -need to set this if you want your builds to use a different value. +|`openshift_node_kubelet_args` +|This variable is used to configure `kubeletArguments` on nodes, such as +arguments used in xref:../../admin_guide/garbage_collection.adoc#admin-guide-garbage-collection[container and +image garbage collection], and to +xref:../../admin_guide/manage_nodes.adoc#configuring-node-resources[specify +resources per node]. `kubeletArguments` are key value pairs that are passed +directly to the Kubelet that match the +https://kubernetes.io/docs/admin/kubelet/[Kubelet's command line +arguments]. `kubeletArguments` are not migrated or validated and may become +invalid if used. These values override other settings in node configuration +which may cause invalid configurations. Example usage: +*{'image-gc-high-threshold': ['90'],'image-gc-low-threshold': ['80']}*. -|`*openshift_builddefaults_git_http_proxy*` -|This variable defines the HTTP proxy used by `git clone` operations during a -build, defined using the `*BuildDefaults*` admission controller. If -`*openshift_builddefaults_http_proxy*` is set, this variable will inherit that -value; you only need to set this if you want your `git clone` operations to use -a different value. +|`openshift_hosted_router_selector` +|Default node selector for automatically deploying router pods. See +xref:configuring-node-host-labels[Configuring Node Host Labels] for details. -|`*openshift_builddefaults_git_https_proxy*` -|This variable defines the HTTPS proxy used by `git clone` operations during a -build, defined using the `*BuildDefaults*` admission controller. If -`*openshift_builddefaults_https_proxy*` is set, this variable will inherit that -value; you only need to set this if you want your `git clone` operations to use -a different value. +|`openshift_registry_selector` +|Default node selector for automatically deploying registry pods. See +xref:configuring-node-host-labels[Configuring Node Host Labels] for details. + +|`openshift_docker_options` +|This variable configures additional `docker` options within +*_/etc/sysconfig/docker_*, such as options used in +xref:../../install_config/install/host_preparation.adoc#managing-docker-container-logs[Managing Container Logs]. Example usage: *"--log-driver json-file --log-opt max-size=1M +--log-opt max-file=3"*. Do not use when +xref:advanced-install-docker-system-container[running `docker` as a system container]. + +|`openshift_schedulable` +|This variable configures whether the host is marked as a schedulable node, +meaning that it is available for placement of new pods. See +xref:marking-masters-as-unschedulable-nodes[Configuring Schedulability on Masters]. |=== -[[configuring-node-host-labels]] -=== Configuring Node Host Labels +[[configuring-host-port]] +=== Configuring Master API and Console Ports -You can assign -xref:../../architecture/core_concepts/pods_and_services.adoc#labels[labels] to -node hosts during the Ansible install by configuring the *_/etc/ansible/hosts_* -file. Labels are useful for determining the placement of pods onto nodes using -the xref:../../admin_guide/scheduler.adoc#configurable-predicates[scheduler]. 
-Other than *region=infra* (discussed below), the actual label names and values -are arbitrary and can be assigned however you see fit per your cluster's -requirements. +To configure the default ports used by the master API and web console, configure +the following variables in the *_/etc/ansible/hosts_* file: -To assign labels to a node host during an Ansible install, use the -`*openshift_node_labels*` variable with the desired labels added to the desired -node host entry in the *[nodes]* section. In the following example, labels are -set for a region called *primary* and a zone called *east*: +[[advanced-master-ports]] +.Master API and Console Ports +[options="header"] +|=== + +|Variable |Purpose +|openshift_master_api_port +|This variable sets the port number to access the {product-title} API. + +|openshift_master_console_port +|This variable sets the console port number to access the {product-title} console with a web browser. +|=== + +For example: -==== ---- -[nodes] -node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}" +openshift_master_api_port=3443 +openshift_master_console_port=8756 ---- -==== -The `*openshift_router_selector*` and `*openshift_registry_selector*` Ansible -settings are set to *region=infra* by default: +[[configuring-cluster-pre-install-checks]] +=== Configuring Cluster Pre-install Checks + +Pre-install checks are a set of diagnostic tasks that run as part of the +*openshift_health_checker* Ansible role. They run prior to an Ansible +installation of {product-title}, ensure that required inventory values are set, +and identify potential issues on a host that can prevent or interfere with a +successful installation. + +The following table describes available pre-install checks that will run before +every Ansible installation of {product-title}: + +[[configuring-cluster-pre-install-checks-pre-install-checks]] +.Pre-install Checks +[options="header"] +|=== + +|Check Name |Purpose + +|`memory_availability` +|This check ensures that a host has the recommended amount of memory for the +specific deployment of {product-title}. Default values have been derived from +the +xref:../../install_config/install/prerequisites.adoc#system-requirements[latest +installation documentation]. A user-defined value for minimum memory +requirements may be set by setting the `openshift_check_min_host_memory_gb` +cluster variable in your inventory file. + +|`disk_availability` +|This check only runs on etcd, master, and node hosts. It ensures that the mount +path for an {product-title} installation has sufficient disk space remaining. +Recommended disk values are taken from the +xref:../../install_config/install/prerequisites.adoc#system-requirements[latest +installation documentation]. A user-defined value for minimum disk space +requirements may be set by setting `openshift_check_min_host_disk_gb` cluster +variable in your inventory file. + +|`docker_storage` +|Only runs on hosts that depend on the *docker* daemon (nodes and containerized +installations). Checks that *docker*'s total usage does not exceed a +user-defined limit. If no user-defined limit is set, *docker*'s maximum usage +threshold defaults to 90% of the total size available. The threshold limit for +total percent usage can be set with a variable in your inventory file: +`max_thinpool_data_usage_percent=90`. A user-defined limit for maximum thinpool +usage may be set by setting the `max_thinpool_data_usage_percent` cluster +variable in your inventory file. 
+ +|`docker_storage_driver` +|Ensures that the *docker* daemon is using a storage driver supported by +{product-title}. If the +https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver[`devicemapper`] +storage driver is being used, the check additionally ensures that a loopback +device is not being used. + +|`docker_image_availability` +|Attempts to ensure that images required by an {product-title} installation are +available either locally or in at least one of the configured container image +registries on the host machine. + +|`package_version` +|Runs on `yum`-based systems determining if multiple releases of a required +{product-title} package are available. Having multiple releases of a package +available during an `enterprise` installation of OpenShift suggests that there +are multiple `yum` repositories enabled for different releases, which may lead +to installation problems. This check is skipped if the `openshift_release` +variable is not defined in the inventory file. + +|`package_availability` +|Runs prior to non-containerized installations of {product-title}. Ensures that +RPM packages required for the current installation are available. + +|`package_update` +|Checks whether a `yum` update or package installation will succeed, without +actually performing it or running `yum` on the host. +|=== + +To disable specific pre-install checks, include the variable +`openshift_disable_check` with a comma-delimited list of check names in your +inventory file. For example: -==== ---- -# default selectors for router and registry services -# openshift_router_selector='region=infra' -# openshift_registry_selector='region=infra' +openshift_disable_check=memory_availability,disk_availability ---- + +[NOTE] +==== +A similar set of health checks meant to run for diagnostics on existing clusters +can be found in +xref:../../admin_guide/diagnostics_tool.adoc#admin-guide-health-checks-via-ansible-playbook[Ansible-based Health Checks]. Another set of checks for checking certificate expiration can be +found in +xref:../../install_config/redeploying_certificates.adoc#install-config-redeploying-certificates[Redeploying Certificates]. ==== -The default router and registry will be automatically deployed if nodes exist -that match the selector settings above. For example: +[[advanced-install-configuring-system-containers]] +=== Configuring System Containers +// tag::syscontainers_techpreview[] +[IMPORTANT] ==== All system container components are ifdef::openshift-enterprise[] @@ -566,8 +705,9 @@ If you are using an image registry other than the default at *_/etc/ansible/hosts_* file. ---- -oreg_url=example.com/openshift3/ose-${component}:${version} +oreg_url={registry}/openshift3/ose-${component}:${version} openshift_examples_modify_imagestreams=true +openshift_docker_additional_registries={registry} ---- .Registry Variables @@ -580,24 +720,121 @@ openshift_examples_modify_imagestreams=true |`*openshift_examples_modify_imagestreams*` |Set to `true` if pointing to a registry other than the default. Modifies the image stream location to the value of `*oreg_url*`. + +|`*openshift_docker_additional_registries*` +|Specify the additional registry or registries. 
|===

-[[advanced-install-glusterfs-persistent-storage]]
-=== Configuring GlusterFS Persistent Storage
+For example:
+----
+oreg_url=example.com/openshift3/ose-${component}:${version}
+openshift_examples_modify_imagestreams=true
+openshift_docker_additional_registries=example.com
+----

-GlusterFS can be configured to provide
-xref:../../architecture/additional_concepts/storage.adoc#architecture-additional-concepts-storage[peristent storage] and
-xref:../../install_config/storage_examples/gluster_dynamic_example.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[dynamic provisioning] for {product-title}. It can be used both containerized within
-{product-title} and non-containerized on its own nodes.
+[[advanced-install-registry-storage]]
+==== Configuring Registry Storage

-[[advanced-install-containerized-glusterfs-persistent-storage]]
-==== Configuring Containerized GlusterFS Persistent Storage
+There are several options for enabling registry storage when using the advanced
+install:

-ifdef::openshift-enterprise[]
-This option utilizes
-link:https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/container-native_storage_for_openshift_container_platform/[Red Hat Container Native Storage (CNS)] for configuring containerized GlusterFS persistent storage in {product-title}.
-endif::[]
-ifdef::openshift-origin[]
+[discrete]
+[[advanced-install-registry-storage-nfs-host-group]]
+===== Option A: NFS Host Group
+
+When the following variables are set, an NFS volume is created during an
+advanced install with the path *_<nfs_directory>/<volume_name>_* on the host
+within the `[nfs]` host group. For example, the volume path using these options
+would be *_/exports/registry_*:
+
+----
+[OSEv3:vars]
+
+openshift_hosted_registry_storage_kind=nfs
+openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
+openshift_hosted_registry_storage_nfs_directory=/exports
+openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
+openshift_hosted_registry_storage_volume_name=registry
+openshift_hosted_registry_storage_volume_size=10Gi
+----
+
+[discrete]
+[[advanced-install-registry-storage-external-nfs]]
+===== Option B: External NFS Host
+
+To use an external NFS volume, one must already exist with a path of
+*_<nfs_directory>/<volume_name>_* on the storage host. The remote volume path
+using the following options would be *_nfs.example.com:/exports/registry_*.
+
+----
+[OSEv3:vars]
+
+openshift_hosted_registry_storage_kind=nfs
+openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
+openshift_hosted_registry_storage_host=nfs.example.com
+openshift_hosted_registry_storage_nfs_directory=/exports
+openshift_hosted_registry_storage_volume_name=registry
+openshift_hosted_registry_storage_volume_size=10Gi
+----
+
+[discrete]
+[[advanced-install-registry-storage-openstack]]
+===== Option C: OpenStack Platform
+
+An OpenStack storage configuration must already exist.
+
+----
+openshift_hosted_registry_storage_kind=openstack
+openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
+openshift_hosted_registry_storage_openstack_filesystem=ext4
+openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57
+openshift_hosted_registry_storage_volume_size=10Gi
+----
+
+[discrete]
+[[advanced-install-registry-storage-aws]]
+===== Option D: AWS or Another S3 Storage Solution
+
+The Simple Storage Service (S3) bucket must already exist.
+
+----
+#openshift_hosted_registry_storage_kind=object
+#openshift_hosted_registry_storage_provider=s3
+#openshift_hosted_registry_storage_s3_accesskey=access_key_id
+#openshift_hosted_registry_storage_s3_secretkey=secret_access_key
+#openshift_hosted_registry_storage_s3_bucket=bucket_name
+#openshift_hosted_registry_storage_s3_region=bucket_region
+#openshift_hosted_registry_storage_s3_chunksize=26214400
+#openshift_hosted_registry_storage_s3_rootdirectory=/registry
+#openshift_hosted_registry_pullthrough=true
+#openshift_hosted_registry_acceptschema2=true
+#openshift_hosted_registry_enforcequota=true
+----
+
+If you are using a different S3 service, such as Minio or ExoScale, also add the
+region endpoint parameter:
+
+----
+openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.com/
+----
+
+
+[[advanced-install-glusterfs-persistent-storage]]
+=== Configuring GlusterFS Persistent Storage
+
+GlusterFS can be configured to provide
+xref:../../architecture/additional_concepts/storage.adoc#architecture-additional-concepts-storage[persistent storage] and
+xref:../../install_config/storage_examples/gluster_dynamic_example.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[dynamic provisioning] for {product-title}. It can be used both containerized within
+{product-title} and non-containerized on its own nodes.
+
+[[advanced-install-containerized-glusterfs-persistent-storage]]
+==== Configuring Containerized GlusterFS Persistent Storage
+
+ifdef::openshift-enterprise[]
+This option utilizes
+link:https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/container-native_storage_for_openshift_container_platform/[Red Hat Container Native Storage (CNS)] for configuring containerized GlusterFS persistent storage in {product-title}.
+endif::[]
+ifdef::openshift-origin[]
See link:https://github.com/gluster/gluster-kubernetes[Running Containerized GlusterFS in Kubernetes] for additional information on containerized storage using GlusterFS.
endif::[]
@@ -657,8 +894,33 @@ the GlusterFS node.
+
----
[nodes]
-node1.example.com openshift_node_labels="{'region':'infra','zone':'default'}"
+192.168.10.14
+192.168.10.15
+192.168.10.16
+----
+
+. After completing the cluster installation per
+xref:running-the-advanced-installation[Running the Advanced Installation], run
+the following from a master to verify the necessary objects were successfully
+created:
+
+.. Verify that the GlusterFS `StorageClass` was created:
++
----
+# oc get storageclass
+NAME                TYPE
+glusterfs-storage   kubernetes.io/glusterfs
+----
+
+.. Verify that the route was created:
++
+----
+# oc get routes
+NAME                     HOST/PORT                                        PATH      SERVICES           PORT      TERMINATION   WILDCARD
+heketi-glusterfs-route   heketi-glusterfs-default.cloudapps.example.com             heketi-glusterfs    None
+----
++
+[NOTE]
====
The name for the route will be `heketi-glusterfs-route` unless the default
`glusterfs` value was overridden using the `openshift_glusterfs_storage_name`
@@ -720,14 +982,14 @@ nodes
glusterfs_registry
----

-. Add the following role variable in the `[OSEv3:vars]` section:
+. 
Add the following role variable in the `[OSEv3:vars]` section to enable the
+GlusterFS-backed registry, provided that the `glusterfs_registry` group name and
+the `[glusterfs_registry]` group exist:
+
----
[OSEv3:vars]

-openshift_hosted_registry_storage_kind=glusterfs <1>
+openshift_hosted_registry_storage_kind=glusterfs
----
-<1> Enables the GlusterFS-backed registry if the `glusterfs_registry` group name and
-the `[glusterfs_registry]` group exist.

. It is recommended to have at least three registry pods, so set the following
role variable in the `[OSEv3:vars]` section:
@@ -736,6 +998,15 @@ role variable in the `[OSEv3:vars]` section:

openshift_hosted_registry_replicas=3
----

+. If you want to specify the volume size for the GlusterFS-backed registry, set
+the following role variable in the `[OSEv3:vars]` section:
++
+----
+openshift_hosted_registry_storage_volume_size=10Gi
+----
++
+If unspecified, the volume size defaults to `5Gi`.
+
. The installer will deploy the OpenShift Container Registry pods and associated
routers on nodes containing the `region=infra` label. Add this label on at least
one node entry in the `[nodes]` section, otherwise the registry deployment will
@@ -808,60 +1079,79 @@ environment is defined for builds.

|Variable |Purpose

-|`*openshift_http_proxy*`
-|This variable specifies the `*HTTP_PROXY*` environment variable for masters and
+|`openshift_http_proxy`
+|This variable specifies the `HTTP_PROXY` environment variable for masters and
the Docker daemon.

-|`*openshift_https_proxy*`
-|This variable specifices the `*HTTPS_PROXY*` environment variable for masters
+|`openshift_https_proxy`
+|This variable specifies the `HTTPS_PROXY` environment variable for masters
and the Docker daemon.

-|`*openshift_no_proxy*`
-|This variable is used to set the `*NO_PROXY*` environment variable for masters
+|`openshift_no_proxy`
+|This variable is used to set the `NO_PROXY` environment variable for masters
and the Docker daemon. This value should be set to a comma separated list of
host names or wildcard host names that should not use the defined proxy. This
list will be augmented with the list of all defined {product-title} host names
by default.

-|`*openshift_generate_no_proxy_hosts*`
+|`openshift_generate_no_proxy_hosts`
|This boolean variable specifies whether or not the names of all defined
OpenShift hosts and `pass:[*.cluster.local]` should be automatically appended to
-the `*NO_PROXY*` list. Defaults to *true*; set it to *false* to override this
+the `NO_PROXY` list. Defaults to `true`; set it to `false` to override this
option.

-|`*openshift_builddefaults_http_proxy*`
-|This variable defines the `*HTTP_PROXY*` environment variable inserted into
-builds using the `*BuildDefaults*` admission controller. If
-`*openshift_http_proxy*` is set, this variable will inherit that value; you only
+|`openshift_builddefaults_http_proxy`
+|This variable defines the `HTTP_PROXY` environment variable inserted into
+builds using the `BuildDefaults` admission controller. If
+`openshift_http_proxy` is set, this variable will inherit that value; you only
need to set this if you want your builds to use a different value.

-|`*openshift_builddefaults_https_proxy*`
+|`openshift_builddefaults_https_proxy`
|This variable defines the `*HTTPS_PROXY*` environment variable inserted into
builds using the `*BuildDefaults*` admission controller. If
`*openshift_https_proxy*` is set, this variable will inherit that value; you
only need to set this if you want your builds to use a different value.
-|`*openshift_builddefaults_no_proxy*` -|This variable defines the `*NO_PROXY*` environment variable inserted into -builds using the `*BuildDefaults*` admission controller. If -`*openshift_no_proxy*` is set, this variable will inherit that value; you only +|`openshift_builddefaults_no_proxy` +|This variable defines the `NO_PROXY` environment variable inserted into +builds using the `BuildDefaults` admission controller. If +`openshift_no_proxy` is set, this variable will inherit that value; you only need to set this if you want your builds to use a different value. -|`*openshift_builddefaults_git_http_proxy*` +|`openshift_builddefaults_git_http_proxy` |This variable defines the HTTP proxy used by `git clone` operations during a -build, defined using the `*BuildDefaults*` admission controller. If -`*openshift_builddefaults_http_proxy*` is set, this variable will inherit that +build, defined using the `BuildDefaults` admission controller. If +`openshift_builddefaults_http_proxy` is set, this variable will inherit that value; you only need to set this if you want your `git clone` operations to use a different value. -|`*openshift_builddefaults_git_https_proxy*` +|`openshift_builddefaults_git_https_proxy` |This variable defines the HTTPS proxy used by `git clone` operations during a -build, defined using the `*BuildDefaults*` admission controller. If -`*openshift_builddefaults_https_proxy*` is set, this variable will inherit that +build, defined using the `BuildDefaults` admission controller. If +`openshift_builddefaults_https_proxy` is set, this variable will inherit that value; you only need to set this if you want your `git clone` operations to use a different value. |=== +[[advanced-install-no-proxy-list]] +If any of: + +- `openshift_no_proxy` +- `openshift_https_proxy` +- `openshift_http_proxy` + +are set, then all cluster hosts will have an automatically generated `NO_PROXY` +environment variable injected into several service configuration scripts. The +default `.svc` domain and your cluster's `dns_domain` (typically +`.cluster.local`) will also be added. + +[NOTE] +==== +Setting `openshift_generate_no_proxy_hosts` to `false` in your inventory will +not disable the automatic addition of the `.svc` domain and the cluster domain. +These are required and added automatically if any of the above listed proxy +parameters are set. +==== ifdef::openshift-enterprise,openshift-origin[] [[advanced-install-configuring-firewalls]] @@ -898,7 +1188,7 @@ os_firewall_use_firewalld=True endif::[] [[marking-masters-as-unschedulable-nodes]] -=== Marking Masters as Unschedulable Nodes +=== Configuring Schedulability on Masters Any hosts you designate as masters during the installation process should also be configured as nodes so that the masters are configured as part of the @@ -918,19 +1208,89 @@ You can manually set a master host to schedulable during installation using the `openshift_schedulable=true` host variable, though this is not recommended in production environments: -However, in order to ensure that your masters are not burdened with running -pods, you can make them -xref:../../admin_guide/manage_nodes.adoc#marking-nodes-as-unschedulable-or-schedulable[unschedulable] -by adding the `*openshift_scheduleable=false*` option any node that is also a -master. 
For example:
+----
+[nodes]
+master.example.com openshift_schedulable=true
+----
+
+If you want to change the schedulability of a host post-installation, see
+xref:../../admin_guide/manage_nodes.adoc#marking-nodes-as-unschedulable-or-schedulable[Marking Nodes as Unschedulable or Schedulable].
+
+[[configuring-node-host-labels]]
+=== Configuring Node Host Labels
+
+You can assign
+xref:../../architecture/core_concepts/pods_and_services.adoc#labels[labels] to
+node hosts during the Ansible install by configuring the *_/etc/ansible/hosts_*
+file. Labels are useful for determining the placement of pods onto nodes using
+the xref:../../admin_guide/scheduling/scheduler.adoc#configurable-predicates[scheduler].
+Other than `region=infra` (discussed in
+xref:configuring-dedicated-infrastructure-nodes[Configuring Dedicated Infrastructure Nodes]), the actual label names and values are arbitrary and can
+be assigned however you see fit per your cluster's requirements.
+
+To assign labels to a node host during an Ansible install, use the
+`openshift_node_labels` variable with the desired labels added to the desired
+node host entry in the `[nodes]` section. In the following example, labels are
+set for a region called `primary` and a zone called `east`:
+
+----
+[nodes]
+node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
+----
+
+[[configuring-dedicated-infrastructure-nodes]]
+==== Configuring Dedicated Infrastructure Nodes
+
+The `openshift_router_selector` and `openshift_registry_selector` Ansible
+settings determine the label selectors used when placing registry and router
+pods. They are set to `region=infra` by default:
+
+----
+# default selectors for router and registry services
+# openshift_router_selector='region=infra'
+# openshift_registry_selector='region=infra'
+----
+
+The registry and router are only able to run on node hosts with the `region=infra` label.
+Ensure that at least one node host in your {product-title} environment has the `region=infra` label. For example:
+
+----
+[nodes]
+infra-node1.example.com openshift_node_labels="{'region': 'infra','zone': 'default'}"
+----
+
+[IMPORTANT]
+====
+If there is no node in the `[nodes]` section that matches the selector settings,
+the default router and registry will fail to deploy, and their pods will remain in
+`Pending` status.
====
+It is recommended for production environments that you maintain dedicated
+infrastructure nodes where the registry and router pods can run separately from
+pods used for user applications.
+
+If you do not intend to use {product-title} to manage the registry and router,
+configure the following Ansible settings:
+
+----
+openshift_hosted_manage_registry=false
+openshift_hosted_manage_router=false
+----
+
+If you are using an image registry other than the default `registry.access.redhat.com`,
+you need to xref:advanced-install-configuring-registry-location[specify the desired registry]
+in the *_/etc/ansible/hosts_* file.
+
+As described in xref:marking-masters-as-unschedulable-nodes[Configuring
+Schedulability on Masters], master hosts are marked unschedulable by default. If
+you label a master host with `region=infra` and have no other dedicated
+infrastructure nodes, you must also explicitly mark these master hosts as
+schedulable.
Otherwise, the registry and router pods cannot be placed anywhere: + +---- +[nodes] +master.example.com openshift_node_labels="{'region': 'infra','zone': 'default'}" openshift_schedulable=true +---- [[advanced-install-session-options]] === Configuring Session Options @@ -946,12 +1306,10 @@ re-created if deleted on all masters. You can set the session name and maximum number of seconds with `*openshift_master_session_name*` and `*openshift_master_session_max_seconds*`: -==== ---- openshift_master_session_name=ssn openshift_master_session_max_seconds=3600 ---- -==== If provided, `*openshift_master_session_auth_secrets*` and `*openshift_master_encryption_secrets*` must be equal length. @@ -959,26 +1317,22 @@ If provided, `*openshift_master_session_auth_secrets*` and For `*openshift_master_session_auth_secrets*`, used to authenticate sessions using HMAC, it is recommended to use secrets with 32 or 64 bytes: -==== ---- openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO'] ---- -==== For `*openshift_master_encryption_secrets*`, used to encrypt sessions, secrets must be 16, 24, or 32 characters long, to select AES-128, AES-192, or AES-256: -==== ---- openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO'] ---- -==== [[advanced-install-custom-certificates]] === Configuring Custom Certificates xref:../../install_config/certificate_customization.adoc#install-config-certificate-customization[Custom serving -certificates] for the public host names of the OpenShift API and +certificates] for the public host names of the {product-title} API and xref:../../architecture/infrastructure_components/web_console.adoc#architecture-infrastructure-components-web-console[web console] can be deployed during an advanced installation and are configurable in the inventory file. @@ -997,11 +1351,9 @@ internal `*masterURL*` host. Certificate and key file paths can be configured using the `*openshift_master_named_certificates*` cluster variable: -==== ---- openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key"}] ---- -==== File paths must be local to the system where Ansible will be run. Certificates are copied to master hosts and are deployed within the @@ -1011,11 +1363,9 @@ Ansible detects a certificate's `Common Name` and `Subject Alternative Names`. 
Detected names can be overridden by providing the `*"names"*` key when setting `*openshift_master_named_certificates*`: -==== ---- openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"]}] ---- -==== Certificates configured using `*openshift_master_named_certificates*` are cached on masters, meaning that each additional Ansible run with a different set of @@ -1026,242 +1376,805 @@ If you would like `*openshift_master_named_certificates*` to be overwritten with the provided value (or no value), specify the `*openshift_master_overwrite_named_certificates*` cluster variable: -==== ---- openshift_master_overwrite_named_certificates=true ---- -==== For a more complete example, consider the following cluster variables in an inventory file: -==== ---- openshift_master_cluster_method=native openshift_master_cluster_hostname=lb.openshift.com openshift_master_cluster_public_hostname=custom.openshift.com ---- -==== To overwrite the certificates on a subsequent Ansible run, you could set the following: -==== ---- -openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key"}, "names": ["custom.openshift.com"]}] +openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"]}] openshift_master_overwrite_named_certificates=true ---- -==== -[[single-master]] -== Single Master Examples +[[advanced-install-config-certificate-validity]] +=== Configuring Certificate Validity -You can configure an environment with a single master and multiple nodes, and -either a single embedded *etcd* or multiple external *etcd* hosts. +By default, the certificates used to govern the etcd, master, and kubelet expire +after two to five years. The validity (length in days until they expire) for the +auto-generated registry, CA, node, and master certificates can be configured +during installation using the following variables (default values shown): -[NOTE] -==== -Moving from a single master cluster to multiple masters after installation is -not supported. -==== +---- +[OSEv3:vars] -[[single-master-multi-node]] -*Single Master and Multiple Nodes* +openshift_hosted_registry_cert_expire_days=730 +openshift_ca_cert_expire_days=1825 +openshift_node_cert_expire_days=730 +openshift_master_cert_expire_days=730 +etcd_ca_default_days=1825 +---- -The following table describes an example environment for a single -xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[master] (with embedded *etcd*) -and two -xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#node[nodes]: +These values are also used when +xref:../../install_config/redeploying_certificates.adoc#install-config-redeploying-certificates[redeploying certificates] via Ansible post-installation. -[options="header"] -|=== +[[advanced-install-cluster-metrics]] +=== Configuring Cluster Metrics -|Host Name |Infrastructure Component to Install +Cluster metrics are not set to automatically deploy by default. Set the +following to enable cluster metrics when using the advanced install: -|*master.example.com* -|Master and node +---- +[OSEv3:vars] -|*node1.example.com* -.2+.^|Node +openshift_metrics_install_metrics=true +---- -|*node2.example.com* -|=== +The {product-title} web console uses the data coming from the Hawkular Metrics +service to display its graphs. 
The metrics public URL can be set during cluster +installation using the `openshift_metrics_hawkular_hostname` Ansible variable, +which defaults to: -You can see these example hosts present in the *[masters]* and *[nodes]* -sections of the following example inventory file: +`\https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics` -.Single Master and Multiple Nodes Inventory File -==== +If you alter this variable, ensure the host name is accessible via your router. ----- -# Create an OSEv3 group that contains the masters and nodes groups -[OSEv3:children] -masters -nodes +[[advanced-install-cluster-metrics-storage]] +==== Configuring Metrics Storage -# Set variables common for all OSEv3 hosts +The `openshift_metrics_cassandra_storage_type` variable must be set in order to +use persistent storage for metrics. If +`openshift_metrics_cassandra_storage_type` is not set, then cluster metrics data +is stored in an `emptyDir` volume, which will be deleted when the Cassandra pod +terminates. + +There are three options for enabling cluster metrics storage when using the +advanced install: + +[discrete] +[[advanced-install-cluster-metrics-storage-nfs-host-group]] +===== Option A: NFS Host Group + +When the following variables are set, an NFS volume is created during an +advanced install with path *_/_* on the host within +the `[nfs]` host group. For example, the volume path using these options would +be *_/exports/metrics_*: + +---- [OSEv3:vars] -# SSH user, this user should allow ssh based auth without requiring a password -ansible_ssh_user=root -# If ansible_ssh_user is not root, ansible_sudo must be set to true -#ansible_sudo=true +openshift_metrics_storage_kind=nfs +openshift_metrics_storage_access_modes=['ReadWriteOnce'] +openshift_metrics_storage_nfs_directory=/exports +openshift_metrics_storage_nfs_options='*(rw,root_squash)' +openshift_metrics_storage_volume_name=metrics +openshift_metrics_storage_volume_size=10Gi +---- -ifdef::openshift-enterprise[] -deployment_type=openshift-enterprise -endif::[] -ifdef::openshift-origin[] -deployment_type=origin -endif::[] +[discrete] +[[advanced-install-cluster-metrics-storage-external-nfs]] +===== Option B: External NFS Host -# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider -#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}] +To use an external NFS volume, one must already exist with a path of +*_/_* on the storage host. -# host group for masters -[masters] -master.example.com +---- +[OSEv3:vars] -# host group for nodes, includes region info -[nodes] -master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" -node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}" -node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" +openshift_metrics_storage_kind=nfs +openshift_metrics_storage_access_modes=['ReadWriteOnce'] +openshift_metrics_storage_host=nfs.example.com +openshift_metrics_storage_nfs_directory=/exports +openshift_metrics_storage_volume_name=metrics +openshift_metrics_storage_volume_size=10Gi ---- -==== -To use this example, modify the file to match your environment and -specifications, and save it as *_/etc/ansible/hosts_*. +The remote volume path using the following options would be +*_nfs.example.com:/exports/metrics_*. 
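+
+The export on the external NFS server is not created by the installer and must
+exist before you run the playbooks. As a minimal sketch only, reusing the
+`nfs.example.com` host and *_/exports/metrics_* path from the example above and
+export options similar to those used for the `[nfs]` host group, the
+corresponding entry in *_/etc/exports_* on the NFS server might look like:
+
+----
+/exports/metrics *(rw,root_squash)
+----
+
+After editing *_/etc/exports_*, re-export the file systems (for example, with
+`exportfs -ra`) so that the path is available when the installer creates the
+corresponding persistent volume.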
-[[single-master-multi-etcd-multi-node]] -*Single Master, Multiple etcd, and Multiple Nodes* +[discrete] +[[advanced-install-cluster-metrics-storage-dynamic]] +===== Option C: Dynamic -The following table describes an example environment for a single -xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[master], -three -xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[*etcd*] -hosts, and two -xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#node[nodes]: +Use the following variable if your {product-title} environment supports +xref:../../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[dynamic volume provisioning] for your cloud provider: -[options="header"] -|=== +---- +[OSEv3:vars] -|Host Name |Infrastructure Component to Install +openshift_metrics_cassandra_storage_type=dynamic +---- -|*master.example.com* -|Master and node +[[advanced-install-cluster-logging]] +=== Configuring Cluster Logging -|*etcd1.example.com* -.3+.^|*etcd* +Cluster logging is not set to automatically deploy by default. Set the +following to enable cluster logging when using the advanced installation method: -|*etcd2.example.com* +---- +[OSEv3:vars] -|*etcd3.example.com* +openshift_logging_install_logging=true +---- -|*node1.example.com* -.2+.^|Node +[[advanced-installation-logging-storage]] +==== Configuring Logging Storage -|*node2.example.com* -|=== +The `openshift_logging_storage_kind` variable must be set in order to use +persistent storage for logging. If `openshift_logging_storage_kind` is +not set, then cluster logging data is stored in an `emptyDir` volume, which will +be deleted when the Elasticsearch pod terminates. -[NOTE] -==== -When specifying multiple *etcd* hosts, external *etcd* is installed and -configured. Clustering of OpenShift's embedded *etcd* is not supported. -==== +There are three options for enabling cluster logging storage when using the +advanced install: -You can see these example hosts present in the *[masters]*, *[nodes]*, and -*[etcd]* sections of the following example inventory file: +[discrete] +[[advanced-installation-logging-storage-nfs-host-group]] +===== Option A: NFS Host Group -.Single Master, Multiple etcd, and Multiple Nodes Inventory File -==== +When the following variables are set, an NFS volume is created during an +advanced install with path *_/_* on the host within +the `[nfs]` host group. 
For example, the volume path using these options would be +*_/exports/logging_*: ---- -# Create an OSEv3 group that contains the masters, nodes, and etcd groups -[OSEv3:children] -masters -nodes -etcd - -# Set variables common for all OSEv3 hosts [OSEv3:vars] -ansible_ssh_user=root -ifdef::openshift-enterprise[] -deployment_type=openshift-enterprise -endif::[] -ifdef::openshift-origin[] -deployment_type=origin -endif::[] -# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider -#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}] +openshift_logging_storage_kind=nfs +openshift_logging_storage_access_modes=['ReadWriteOnce'] +openshift_logging_storage_nfs_directory=/exports +openshift_logging_storage_nfs_options='*(rw,root_squash)' +openshift_logging_storage_volume_name=logging +openshift_logging_storage_volume_size=10Gi +---- -# host group for masters -[masters] -master.example.com +[discrete] +[[advanced-installation-logging-storage-external-nfs]] +===== Option B: External NFS Host -# host group for etcd -[etcd] -etcd1.example.com +To use an external NFS volume, one must already exist with a path of +*_/_* on the storage host. + +---- +[OSEv3:vars] + +openshift_logging_storage_kind=nfs +openshift_logging_storage_access_modes=['ReadWriteOnce'] +openshift_logging_storage_host=nfs.example.com +openshift_logging_storage_nfs_directory=/exports +openshift_logging_storage_volume_name=logging +openshift_logging_storage_volume_size=10Gi +---- + +The remote volume path using the following options would be +*_nfs.example.com:/exports/logging_*. + +[discrete] +[[advanced-installation-logging-storage-dynamic]] +===== Option C: Dynamic + +Use the following variable if your {product-title} environment supports +xref:../../install_config/persistent_storage/dynamically_provisioning_pvs.adoc#install-config-persistent-storage-dynamically-provisioning-pvs[dynamic volume provisioning] for your cloud provider: + +---- +[OSEv3:vars] + +openshift_logging_storage_kind=dynamic +---- + +[[enabling-service-catalog]] +=== Enabling the Service Catalog + +[NOTE] +==== +Enabling the service catalog is a Technology Preview feature only. +ifdef::openshift-enterprise[] +Technology Preview features are not +supported with Red Hat production service level agreements (SLAs), might not be +functionally complete, and Red Hat does not recommend to use them for +production. These features provide early access to upcoming product features, +enabling customers to test functionality and provide feedback during the +development process. + +For more information on Red Hat Technology Preview features support scope, see +https://access.redhat.com/support/offerings/techpreview/. +endif::[] +==== + +Enabling the +xref:../../architecture/service_catalog/index.adoc#architecture-additional-concepts-service-catalog[service catalog] allows service brokers to be registered with the catalog. The web +console is also configured to enable an updated landing page for browsing the +catalog. 
+ +To enable the service catalog, add the following in your inventory file's +`[OSEv3:vars]` section: + +---- +openshift_enable_service_catalog=true +ifdef::openshift-origin[] +openshift_service_catalog_image_prefix=openshift/origin- +openshift_service_catalog_image_version=latest +endif::[] +---- + +When the service catalog is enabled, the web console shows the updated landing +page but still uses the normal image stream and template behavior. The Ansible +service broker is also enabled; see +xref:configuring-ansible-service-broker[Configuring the Ansible Service Broker] +for more details. The template service broker (TSB) is not deployed by default; +see xref:configuring-template-service-broker[Configuring the Template Service Broker] for more information. + +[[configuring-ansible-service-broker]] +=== Configuring the Ansible Service Broker + +[NOTE] +==== +Enabling the Ansible service broker is a Technology Preview feature only. +ifdef::openshift-enterprise[] +Technology Preview features are not +supported with Red Hat production service level agreements (SLAs), might not be +functionally complete, and Red Hat does not recommend to use them for +production. These features provide early access to upcoming product features, +enabling customers to test functionality and provide feedback during the +development process. + +For more information on Red Hat Technology Preview features support scope, see +https://access.redhat.com/support/offerings/techpreview/. +endif::[] +==== + +If you have xref:enabling-service-catalog[enabled the service catalog], the +xref:../../architecture/service_catalog/ansible_service_broker.adoc#arch-ansible-service-broker[Ansible service broker] (ASB) is also enabled. + +The ASB deploys its own etcd instance separate from the etcd used by the rest of +the {product-title} cluster. The ASB's etcd instance requires separate storage +using persistent volumes (PVs) to function. If no PV is available, etcd will +wait until the PV can be satisfied. The ASB application will enter a `CrashLoop` +state until its etcd instance is available. + +[NOTE] +==== +The following example shows usage of an NFS host to provide the required PVs, +but +xref:../../install_config/persistent_storage/index.adoc#install-config-persistent-storage-index[other persistent storage providers] can be used instead. +==== + +Some Ansible playbook bundles (APBs) may also require a PV for their own usage. +Two APBs are currently provided with {product-title} 3.6: MediaWiki and +PostgreSQL. Both of these require their own PV to deploy. + +To configure the ASB: + +. In your inventory file, add `nfs` to the `[OSEv3:children]` section to enable +the `[nfs]` group: ++ +---- +[OSEv3:children] +masters +nodes +nfs +---- + +. Add a `[nfs]` group section and add the host name for the system that will +be the NFS host: ++ +---- +[nfs] +master1.example.com +---- + +. 
In addition to the settings from xref:enabling-service-catalog[Enabling the +Service Catalog], add the following in the `[OSEv3:vars]` +section: ++ +---- +openshift_hosted_etcd_storage_kind=nfs +openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)" +openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd <1> +openshift_hosted_etcd_storage_volume_name=etcd-vol2 <1> +openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"] +openshift_hosted_etcd_storage_volume_size=1G +openshift_hosted_etcd_storage_labels={'storage': 'etcd'} + +ifdef::openshift-origin[] +ansible_service_broker_image_prefix=openshift/ +ansible_service_broker_registry_url="registry.access.redhat.com" +ansible_service_broker_registry_user= <2> +ansible_service_broker_registry_password= <2> +ansible_service_broker_registry_organization= <2> +endif::[] +---- +<1> An NFS volume will be created with path `/` on the +host within the `[nfs]` group. For example, the volume path using these options +would be *_/opt/osev3-etcd/etcd-vol2_*. +ifdef::openshift-origin[] +<2> Only required if `ansible_service_broker_registry_url` is set to a registry that +requires authentication for pulling APBs. +endif::[] ++ +These settings create a persistent volume that is attached to the ASB's etcd +instance during cluster installation. + +[[configuring-template-service-broker]] +=== Configuring the Template Service Broker + +[NOTE] +==== +Enabling the template service broker is a Technology Preview feature only. +ifdef::openshift-enterprise[] +Technology Preview features are not +supported with Red Hat production service level agreements (SLAs), might not be +functionally complete, and Red Hat does not recommend to use them for +production. These features provide early access to upcoming product features, +enabling customers to test functionality and provide feedback during the +development process. + +For more information on Red Hat Technology Preview features support scope, see +https://access.redhat.com/support/offerings/techpreview/. +endif::[] +==== + +If you have xref:enabling-service-catalog[enabled the service catalog], you can +also enable the +xref:../../architecture/service_catalog/template_service_broker.adoc#arch-template-service-broke[template service broker] (TSB). + +To configure the TSB: + +. One or more projects must be defined as the broker's source +namespace(s) for loading templates and image streams into the service catalog. +Set the desired projects by modifying the following in your inventory file's +`[OSEv3:vars]` section: ++ +---- +openshift_template_service_broker_namespaces=['openshift','myproject'] +---- + +. The installer currently does not automate installation of the TSB, so additional +steps must be run manually after the cluster installation has completed. +Continue with the rest of the preparation of your inventory file, then see +xref:running-the-advanced-installation[Running the Advanced Installation] for +the additional steps to deploy the TSB. + +[[configuring-web-console-customization]] +=== Configuring Web Console Customization + +The following Ansible variables set master configuration options for customizing +the web console. See +xref:../../install_config/web_console_customization.adoc#install-config-web-console-customization[Customizing the Web Console] for more details on these customization options. + +.Web Console Customization Variables +[options="header"] +|=== + +|Variable |Purpose + +|`openshift_master_logout_url` +|Sets `logoutURL` in the master configuration. 
See xref:../../install_config/web_console_customization.adoc#changing-the-logout-url[Changing the Logout URL] for details. Example value: `\http://example.com`
+
+|`openshift_master_extension_scripts`
+|Sets `extensionScripts` in the master configuration. See xref:../../install_config/web_console_customization.adoc#loading-custom-scripts-and-stylesheets[Loading Extension Scripts and Stylesheets] for details. Example value: `['/path/to/script1.js','/path/to/script2.js']`
+
+|`openshift_master_extension_stylesheets`
+|Sets `extensionStylesheets` in the master configuration. See xref:../../install_config/web_console_customization.adoc#loading-custom-scripts-and-stylesheets[Loading Extension Scripts and Stylesheets] for details. Example value: `['/path/to/stylesheet1.css','/path/to/stylesheet2.css']`
+
+|`openshift_master_extensions`
+|Sets `extensions` in the master configuration. See xref:../../install_config/web_console_customization.adoc#serving-static-files[Serving Static Files] and xref:../../install_config/web_console_customization.adoc#customizing-the-about-page[Customizing the About Page] for details. Example value: `[{'name': 'images', 'sourceDirectory': '/path/to/my_images'}]`
+
+|`openshift_master_oauth_template`
+|Sets the OAuth template in the master configuration. See xref:../../install_config/web_console_customization.adoc#customizing-the-login-page[Customizing the Login Page] for details. Example value: `['/path/to/login-template.html']`
+
+|`openshift_master_metrics_public_url`
+|Sets `metricsPublicURL` in the master configuration. See xref:../../install_config/cluster_metrics.adoc#install-setting-the-metrics-public-url[Setting the Metrics Public URL] for details. Example value: `\https://hawkular-metrics.example.com/hawkular/metrics`
+
+|`openshift_master_logging_public_url`
+|Sets `loggingPublicURL` in the master configuration. See xref:../../install_config/aggregate_logging.adoc#aggregate-logging-kibana[Kibana] for details. Example value: `\https://kibana.example.com`
+
+|===
+
+[[adv-install-example-inventory-files]]
+== Example Inventory Files
+
+[[single-master]]
+=== Single Master Examples
+
+You can configure an environment with a single master and multiple nodes, and
+either a single *etcd* host or multiple external *etcd* hosts.
+
+[NOTE]
+====
+Moving from a single master cluster to multiple masters after installation is
+not supported.
+==== + +[discrete] +[[single-master-multi-node-ai]] +==== Single Master and Multiple Nodes + +The following table describes an example environment for a single +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[master] (with *etcd* on the same host) +and two +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#node[nodes]: + +[options="header"] +|=== + +|Host Name |Infrastructure Component to Install + +|*master.example.com* +|Master and node + +|*master.example.com* +|etcd + +|*node1.example.com* +.2+.^|Node + +|*node2.example.com* +|=== + +You can see these example hosts present in the *[masters]* and *[nodes]* +sections of the following example inventory file: + +.Single Master and Multiple Nodes Inventory File +---- +# Create an OSEv3 group that contains the masters and nodes groups +[OSEv3:children] +masters +nodes + +# Set variables common for all OSEv3 hosts +[OSEv3:vars] +# SSH user, this user should allow ssh based auth without requiring a password +ansible_ssh_user=root + +# If ansible_ssh_user is not root, ansible_become must be set to true +#ansible_become=true + +ifdef::openshift-enterprise[] +openshift_deployment_type=openshift-enterprise +endif::[] +ifdef::openshift-origin[] +openshift_deployment_type=origin +endif::[] + +# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider +#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}] + +# host group for masters +[masters] +master.example.com + +# host group for etcd +[etcd] +master.example.com + +# host group for nodes, includes region info +[nodes] +master.example.com +node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}" +node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" +infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" +infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" +---- + +To use this example, modify the file to match your environment and +specifications, and save it as *_/etc/ansible/hosts_*. + +[discrete] +[[single-master-multi-etcd-multi-node-ai]] +==== Single Master, Multiple etcd, and Multiple Nodes + +The following table describes an example environment for a single +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[master], +three +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[*etcd*] +hosts, and two +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#node[nodes]: + +[options="header"] +|=== + +|Host Name |Infrastructure Component to Install + +|*master.example.com* +|Master and node + +|*etcd1.example.com* +.3+.^|*etcd* + +|*etcd2.example.com* + +|*etcd3.example.com* + +|*node1.example.com* +.2+.^|Node + +|*node2.example.com* +|=== + +[NOTE] +==== +When specifying multiple *etcd* hosts, external *etcd* is installed and +configured. Clustering of {product-title}'s embedded *etcd* is not supported. 
+==== + +You can see these example hosts present in the *[masters]*, *[nodes]*, and +*[etcd]* sections of the following example inventory file: + +.Single Master, Multiple etcd, and Multiple Nodes Inventory File + +---- +# Create an OSEv3 group that contains the masters, nodes, and etcd groups +[OSEv3:children] +masters +nodes +etcd + +# Set variables common for all OSEv3 hosts +[OSEv3:vars] +ansible_ssh_user=root +ifdef::openshift-enterprise[] +openshift_deployment_type=openshift-enterprise +endif::[] +ifdef::openshift-origin[] +openshift_deployment_type=origin +endif::[] + +# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider +#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}] + +# host group for masters +[masters] +master.example.com + +# host group for etcd +[etcd] +etcd1.example.com +etcd2.example.com +etcd3.example.com + +# host group for nodes, includes region info +[nodes] +master.example.com +node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}" +node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" +infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" +infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" +---- + +To use this example, modify the file to match your environment and +specifications, and save it as *_/etc/ansible/hosts_*. + +[[multiple-masters]] +=== Multiple Masters Examples + +You can configure an environment with multiple masters, multiple *etcd* hosts, +and multiple nodes. Configuring +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#high-availability-masters[multiple +masters for high availability] (HA) ensures that the cluster has no single point +of failure. + +[NOTE] +==== +Moving from a single master cluster to multiple masters after installation is +not supported. +==== + +When configuring multiple masters, the advanced installation supports the following high +availability (HA) method: + +[cols="1,5"] +|=== +|`native` +|Leverages the native HA master capabilities built into {product-title} and can be +combined with any load balancing solution. If a host is defined in the *[lb]* +section of the inventory file, Ansible installs and configures HAProxy +automatically as the load balancing solution. If no host is defined, it is +assumed you have pre-configured an external load balancing solution of your choice to +balance the master API (port 8443) on all master hosts. +|=== + +[NOTE] +==== +This HAProxy load balancer is intended to demonstrate the API server's HA mode +and is not recommended for production environments. If you are deploying to a cloud provider, Red Hat recommends deploying + a cloud-native TCP-based load balancer or take other steps to provide a highly available load balancer. +==== + +For an external load balancing solution, you must have: + +* A pre-created load balancer VIP configured for SSL passthrough. +* A VIP listening on the port specified by the xref:advanced-master-ports[`openshift_master_api_port`] and xref:advanced-master-ports[`openshift_master_console_port`] +values (8443 by default) and proxying back to all master hosts on that port. +* A domain name for VIP registered in DNS. 
+** The domain name will become the value of both +`openshift_master_cluster_public_hostname` and +`openshift_master_cluster_hostname` in the {product-title} installer. + +[NOTE] +==== +This HAProxy load balancer is intended to demonstrate the API server's HA mode +and is not recommended for production environments. If you are deploying to a cloud provider we recommend +that you deploy a cloud-native TCP-based load balancer or take other steps to provide a highly available load balancer. +==== + +See +link:https://github.com/redhat-cop/openshift-playbooks/blob/master/playbooks/installation/load_balancing.adoc[External +Load Balancer Integrations] for more information. + +[NOTE] +==== +For more on the high availability master architecture, see +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[Kubernetes +Infrastructure]. +==== + +Note the following when using the `native` HA method: + +- The advanced installation method does not currently support multiple HAProxy +load balancers in an active-passive setup. See the +https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Load_Balancer_Administration/ch-lvs-overview-VSA.html[Load +Balancer Administration documentation] for post-installation amendments. +- In a HAProxy setup, controller manager servers run as standalone processes. +They elect their active leader with a lease stored in *etcd*. The lease +expires after 30 seconds by default. If a failure happens on an active +controller server, it will take up to this number of seconds to elect another +leader. The interval can be configured with the `*osm_controller_lease_ttl*` +variable. + +To configure multiple masters, refer to the following section. + +[discrete] +[[multi-masters-using-native-ha-ai]] +==== Multiple Masters with Multiple etcd + +The following describes an example environment for three +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[masters], +one HAProxy load balancer, three +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[*etcd*] +hosts, and two +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#node[nodes] +using the `native` HA method: + +[options="header"] +|=== + +|Host Name |Infrastructure Component to Install + +|*master1.example.com* +.3+.^|Master (clustered using native HA) and node + +|*master2.example.com* + +|*master3.example.com* + +|*lb.example.com* +|HAProxy to load balance API master endpoints + +|*etcd1.example.com* +.3+.^|*etcd* + +|*etcd2.example.com* + +|*etcd3.example.com* + +|*node1.example.com* +.2+.^|Node + +|*node2.example.com* +|=== + +[NOTE] +==== +When specifying multiple *etcd* hosts, external *etcd* is installed and +configured. Clustering of {product-title}'s embedded *etcd* is not supported. +==== + +You can see these example hosts present in the *[masters]*, *[etcd]*, *[lb]*, +and *[nodes]* sections of the following example inventory file: + +.Multiple Masters Using HAProxy Inventory File +==== + +---- +# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups. +# The lb group lets Ansible configure HAProxy as the load balancing solution. +# Comment lb out if your load balancer is pre-configured. 
+[OSEv3:children]
+masters
+nodes
+etcd
+lb
+
+# Set variables common for all OSEv3 hosts
+[OSEv3:vars]
+ansible_ssh_user=root
+ifdef::openshift-enterprise[]
+openshift_deployment_type=openshift-enterprise
+endif::[]
+ifdef::openshift-origin[]
+openshift_deployment_type=origin
+endif::[]
+
+# Uncomment the following to enable htpasswd authentication; defaults to
+# DenyAllPasswordIdentityProvider.
+#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
+
+# Native high availability cluster method with optional load balancer.
+# If no lb group is defined installer assumes that a load balancer has
+# been preconfigured. For installation the value of
+# openshift_master_cluster_hostname must resolve to the load balancer
+# or to one or all of the masters defined in the inventory if no load
+# balancer is present.
+openshift_master_cluster_method=native
+openshift_master_cluster_hostname=openshift-cluster.example.com
+openshift_master_cluster_public_hostname=openshift-cluster.example.com
+
+# apply updated node defaults
+openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}
+
+# override the default controller lease ttl
+#osm_controller_lease_ttl=30
+
+# enable ntp on masters to ensure proper failover
+openshift_clock_enabled=true
+
+# host group for masters
+[masters]
+master1.example.com
+master2.example.com
+master3.example.com
+
+# host group for etcd
+[etcd]
+etcd1.example.com
+etcd2.example.com
+etcd3.example.com
+
+# Specify load balancer host
+[lb]
+lb.example.com
+
 # host group for nodes, includes region info
 [nodes]
-master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
+master[1:3].example.com
 node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
 node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
+infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
+infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
 ----
 ====

 To use this example, modify the file to match your environment and
 specifications, and save it as *_/etc/ansible/hosts_*.

-[[multiple-masters]]
-== Multiple Masters Examples
-
-You can configure an environment with multiple masters, multiple *etcd* hosts,
-and multiple nodes. Configuring
-xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#high-availability-masters[multiple
-masters for high availability] (HA) ensures that the cluster has no single point
-of failure.
-
-[NOTE]
-====
-Moving from a single master cluster to multiple masters after installation is
-not supported.
-====
-
-When configuring multiple masters, the advanced installation supports the following high
-availability (HA) method:
-
-[cols="1,5"]
-|===
-|`native`
-|Leverages the native HA master capabilities built into OpenShift and can be
-combined with any load balancing solution. If a host is defined in the *[lb]*
-section of the inventory file, Ansible installs and configures HAProxy
-automatically as the load balancing solution. If no host is defined, it is
-assumed you have pre-configured a load balancing solution of your choice to
-balance the master API (port 8443) on all master hosts.
-|=== - -[NOTE] -==== -For more on the high availability master architecture, see -xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[Kubernetes -Infrastructure]. -==== - -To configure multiple masters, refer to the following section. - -[[multi-masters-using-native-ha]] -*Multiple Masters Using Native HA* +[discrete] +[[multi-masters-single-etcd-using-native-ha]] +==== Multiple Masters with Master and etcd on the Same Host The following describes an example environment for three -xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[masters], -one HAProxy load balancer, three -xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[*etcd*] -hosts, and two +xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[masters] with xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#master[*etcd*] on each host, +one HAProxy load balancer, and two xref:../../architecture/infrastructure_components/kubernetes_infrastructure.adoc#node[nodes] using the `native` HA method: @@ -1271,7 +2184,7 @@ using the `native` HA method: |Host Name |Infrastructure Component to Install |*master1.example.com* -.3+.^|Master (clustered using native HA) and node +.3+.^|Master (clustered using native HA) and node with etcd on each host |*master2.example.com* @@ -1280,31 +2193,16 @@ using the `native` HA method: |*lb.example.com* |HAProxy to load balance API master endpoints -|*etcd1.example.com* -.3+.^|*etcd* - -|*etcd2.example.com* - -|*etcd3.example.com* - |*node1.example.com* .2+.^|Node |*node2.example.com* |=== -[NOTE] -==== -When specifying multiple *etcd* hosts, external *etcd* is installed and -configured. Clustering of OpenShift's embedded *etcd* is not supported. -==== - You can see these example hosts present in the *[masters]*, *[etcd]*, *[lb]*, and *[nodes]* sections of the following example inventory file: -.Multiple Masters Using HAProxy Inventory File ==== - ---- # Create an OSEv3 group that contains the master, nodes, etcd, and lb groups. # The lb group lets Ansible configure HAProxy as the load balancing solution. @@ -1318,18 +2216,13 @@ lb # Set variables common for all OSEv3 hosts [OSEv3:vars] ansible_ssh_user=root -ifdef::openshift-enterprise[] -deployment_type=openshift-enterprise -endif::[] -ifdef::openshift-origin[] -deployment_type=origin -endif::[] +openshift_deployment_type=openshift-enterprise # Uncomment the following to enable htpasswd authentication; defaults to # DenyAllPasswordIdentityProvider. #openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}] -# Native high availbility cluster method with optional load balancer. +# Native high availability cluster method with optional load balancer. # If no lb group is defined installer assumes that a load balancer has # been preconfigured. 
For installation the value of # openshift_master_cluster_hostname must resolve to the load balancer @@ -1350,9 +2243,9 @@ master3.example.com # host group for etcd [etcd] -etcd1.example.com -etcd2.example.com -etcd3.example.com +master1.example.com +master2.example.com +master3.example.com # Specify load balancer host [lb] @@ -1360,41 +2253,44 @@ lb.example.com # host group for nodes, includes region info [nodes] -master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" +master[1:3].example.com node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}" node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" +infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" +infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" ---- ==== To use this example, modify the file to match your environment and specifications, and save it as *_/etc/ansible/hosts_*. -Note the following when using the `native` HA method: -- The advanced installation method does not currently support multiple HAProxy -load balancers in an active-passive setup. See the -https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Load_Balancer_Administration/ch-lvs-overview-VSA.html[Load -Balancer Administration documentation] for post-installation amendments. -- In a HAProxy setup, controller manager servers run as standalone processes. -They elect their active leader with a lease stored in *etcd*. The lease -expires after 30 seconds by default. If a failure happens on an active -controller server, it will take up to this number of seconds to elect another -leader. The interval can be configured with the `*osm_controller_lease_ttl*` -variable. [[running-the-advanced-installation]] == Running the Advanced Installation After you have xref:configuring-ansible[configured Ansible] by defining an -inventory file in *_/etc/ansible/hosts_*, you can run the advanced installation -using the following playbook: +inventory file in *_/etc/ansible/hosts_*, you run the advanced installation +playbook via Ansible. {product-title} installations are currently supported +using the RPM-based installer, while the containerized installer is currently a +Technology Preview feature. + +[[running-the-advanced-installation-rpm]] +=== Running the RPM-based Installer + +The RPM-based installer uses Ansible installed via RPM packages to run playbooks +and configuration files available on the local host. To run the installer, use +the following command, specifying `-i` if your inventory file located somewhere +other than *_/etc/ansible/hosts_*: ---- ifdef::openshift-enterprise[] -# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml +# ansible-playbook [-i /path/to/inventory] \ + /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml endif::[] ifdef::openshift-origin[] -# ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml +# ansible-playbook [-i /path/to/inventory] \ + ~/openshift-ansible/playbooks/byo/config.yml endif::[] ---- @@ -1402,166 +2298,292 @@ If for any reason the installation fails, before re-running the installer, see xref:installer-known-issues[Known Issues] to check for any specific instructions or workarounds. 
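Before running (or re-running) the playbook, it can also help to confirm that
Ansible can reach every host defined in the inventory. The following optional
pre-flight checks are a suggested sketch rather than part of the documented
procedure; the inventory path shown is an assumption, so adjust it to match
your environment:

----
# ansible -i /path/to/inventory OSEv3 --list-hosts <1>
# ansible -i /path/to/inventory OSEv3 -m ping <2>
----
<1> Prints the hosts that fall under the `OSEv3` group, so you can confirm the
inventory file is the one you intend to use.
<2> Verifies SSH connectivity and a usable Python interpreter on each of those
hosts; nothing on the hosts is changed.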
-[[advanced-verifying-the-installation]] -== Verifying the Installation +[[running-the-advanced-installation-system-container]] +=== Running the Containerized Installer -// tag::verifying-the-installation[] -After the installation completes, verify that the master is started and nodes -are registered and reporting in *Ready* status. *On the master host*, run the -following as root: +include::install_config/install/advanced_install.adoc[tag=syscontainers_techpreview] -==== ----- -# oc get nodes +The +ifdef::openshift-enterprise[] +*openshift3/ose-ansible* +endif::[] +ifdef::openshift-origin[] +*openshift/origin-ansible* +endif::[] +image is a containerized version of the {product-title} installer that runs as a +link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_system_containers[system container]. System containers are stored and run outside of the traditional +*docker* service. Functionally, using the containerized installer is the same as +using the traditional RPM-based installer, except it is running in a +containerized environment instead of directly on the host. -NAME LABELS STATUS -master.example.com kubernetes.io/hostname=master.example.com,region=infra,zone=default Ready,SchedulingDisabled -node1.example.com kubernetes.io/hostname=node1.example.com,region=primary,zone=east Ready -node2.example.com kubernetes.io/hostname=node2.example.com,region=primary,zone=west Ready +. Use the Docker CLI to pull the image locally: ++ +---- +ifdef::openshift-enterprise[] +$ docker pull registry.access.redhat.com/openshift3/ose-ansible:v3.6 +endif::[] +ifdef::openshift-origin[] +$ docker pull docker.io/openshift/origin-ansible:v3.6 +endif::[] ---- -==== -// end::verifying-the-installation[] - -*Multiple etcd Hosts* - -If you installed multiple *etcd* hosts: -. On a master host, verify the *etcd* cluster health, substituting for the FQDNs -of your *etcd* hosts in the following: +. The installer system container must be stored in +link:https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/content_management_guide/managing_ostree_content[OSTree] +instead of defaulting to *docker* daemon storage. Use the Atomic CLI to import +the installer image from the local *docker* engine to OSTree storage: + -==== ---- -# etcdctl -C \ - https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \ - --ca-file=/etc/origin/master/master.etcd-ca.crt \ - --cert-file=/etc/origin/master/master.etcd-client.crt \ - --key-file=/etc/origin/master/master.etcd-client.key cluster-health +$ atomic pull --storage ostree \ +ifdef::openshift-enterprise[] + docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6 +endif::[] +ifdef::openshift-origin[] + docker:docker.io/openshift/origin-ansible:v3.6 +endif::[] ---- -==== -. Also verify the member list is correct: +. 
Install the system container so it is set up as a systemd service: + -==== ---- -# etcdctl -C \ - https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \ - --ca-file=/etc/origin/master/master.etcd-ca.crt \ - --cert-file=/etc/origin/master/master.etcd-client.crt \ - --key-file=/etc/origin/master/master.etcd-client.key member list +$ atomic install --system \ + --storage=ostree \ + --name=openshift-installer \//<1> + --set INVENTORY_FILE=/path/to/inventory \//<2> +ifdef::openshift-enterprise[] + docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6 +endif::[] +ifdef::openshift-origin[] + docker:docker.io/openshift/origin-ansible:v3.6 +endif::[] ---- -==== - -*Multiple Masters Using HAProxy* +<1> Sets the name for the systemd service. +<2> Specify the location for your inventory file on your local workstation. -If you installed multiple masters using HAProxy as a load balancer, browse to -the following URL according to your *[lb]* section definition and check -HAProxy's status: +. Use the `systemctl` command to start the installer service as you would any +other systemd service. This command initiates the cluster installation: ++ +---- +$ systemctl start openshift-installer +---- ++ +If for any reason the installation fails, before re-running the installer, see +xref:installer-known-issues[Known Issues] to check for any specific instructions +or workarounds. +. After the installation completes, you can uninstall the system container if you want. However, if you need to run the installer again to run any other playbooks later, you would have to follow this procedure again. ++ +To uninstall the system container: ++ ---- -http://:9000 +$ atomic uninstall openshift-installer ---- -You can verify your installation by consulting the -https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Load_Balancer_Administration/ch-haproxy-setup-VSA.html[HAProxy -Configuration documentation]. +[[running-the-advanced-installation-system-container-other-playbooks]] +==== Running Other Playbooks -[[adding-nodes-advanced]] -== Adding Nodes to an Existing Cluster +After you have completed the cluster installation, if you want to later run any +other playbooks using the containerized installer (for example, cluster upgrade +playbooks), you can use the `PLAYBOOK_FILE` environment variable. The default +value is `playbooks/byo/config.yml`, which is the main cluster installation +playbook, but you can set it to the path of another playbook inside the +container. -After your cluster is installed, you can install additional nodes and add them -to your cluster by running the *_scaleup.yml_* playbook. This playbook queries -the master, generates and distributes new certificates for the new nodes, then -runs the configuration playbooks on the new nodes only. +For example: +---- +$ atomic install --system \ + --storage=ostree \ + --name=openshift-installer \ + --set INVENTORY_FILE=/etc/ansible/hosts \ + --set PLAYBOOK_FILE=playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade.yml \//<1> ifdef::openshift-enterprise[] -This process is similar to re-running the installer in the -xref:../../install_config/install/quick_install.adoc#adding-nodes-or-reinstalling-quick[quick -installation method to add nodes], however you have more configuration options -available when using the advanced method and running the playbooks directly. 
+ docker:registry.access.redhat.com/openshift3/ose-ansible:v3.6 endif::[] +ifdef::openshift-origin[] + docker:docker.io/openshift/origin-ansible:v3.6 +endif::[] +---- +<1> Set `PLAYBOOK_FILE` to the relative path of the playbook starting at the +*_playbooks/_* directory. Playbooks mentioned elsewhere in {product-title} +documentation assume use of the RPM-based installer, so use this relative path +instead when using the containerized installer. + +[[running-the-advanced-installation-tsb]] +=== Deploying the Template Service Broker -You must have an existing inventory file (for example, *_/etc/ansible/hosts_*) -that is representative of your current cluster configuration in order to run the -*_scaleup.yml_* playbook. +If you have xref:enabling-service-catalog[enabled the service catalog] and want +to deploy the xref:configuring-template-service-broker[template service broker] +(TSB), run the following manual steps after the cluster installation completes +successfully: + +[NOTE] +==== +The template service broker is a Technology Preview feature only. ifdef::openshift-enterprise[] -If you previously used the `atomic-openshift-installer` command to run your -installation, you can check *_~/.config/openshift/.ansible/hosts_* for the last -inventory file that the installer generated and use or modify that as needed as -your inventory file. You must then specify the file location with `-i` when -calling `ansible-playbook` later. +Technology Preview features are not +supported with Red Hat production service level agreements (SLAs), might not be +functionally complete, and Red Hat does not recommend to use them for +production. These features provide early access to upcoming product features, +enabling customers to test functionality and provide feedback during the +development process. + +For more information on Red Hat Technology Preview features support scope, see +https://access.redhat.com/support/offerings/techpreview/. endif::[] +==== -[IMPORTANT] +[WARNING] ==== -The recommended maximum number of nodes is 300. +Enabling the TSB currently requires opening unauthenticated access to the +cluster; this security issue will be resolved before exiting the Technology +Preview phase. ==== -To add nodes to an existing cluster: +. Ensure that one or more source projects for the TSB were defined via +`openshift_template_service_broker_namespaces` as described in +xref:../../install_config/install/advanced_install.adoc#configuring-template-service-broker[Configuring the Template Service Broker]. -. Ensure you have the latest playbooks by updating the *atomic-openshift-utils* -package: +. Run the following command to enable unauthenticated access for the TSB: + ---- -# yum update atomic-openshift-utils +$ oc adm policy add-cluster-role-to-group \ + system:openshift:templateservicebroker-client \ + system:unauthenticated system:authenticated ---- -. Edit your *_/etc/ansible/hosts_* file and add `new_nodes` to the -*[OSEv3:children]* section: +. Create a *_template-broker.yml_* file with the following contents: + -==== +[source,yaml] ---- -[OSEv3:children] -masters -nodes -new_nodes +apiVersion: servicecatalog.k8s.io/v1alpha1 +kind: Broker +metadata: + name: template-broker +spec: + url: https://kubernetes.default.svc:443/brokers/template.openshift.io ---- -==== -. Then, create a *[new_nodes]* section much like the existing *[nodes]* section, -specifying host information for any new nodes you want to add. For example: +. 
Use the file to register the broker: + -==== ---- -[nodes] -master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" -node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}" -node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" +$ oc create -f template-broker.yml +---- -[new_nodes] -node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" +. Enable the Technology Preview feature in the web console to use the TSB instead +of the standard `openshift` global library behavior. + +.. Save the following script to a file (for example, *_tech-preview.js_*): ++ +[source, javascript] ---- -==== +window.OPENSHIFT_CONSTANTS.ENABLE_TECH_PREVIEW_FEATURE.template_service_broker = true; +---- + +.. Add the file to the master configuration file in +*_/etc/origin/master/master-config.yml_*: + -See xref:advanced-host-variables[Configuring Host Variables] for more options. +[source, yaml] +---- +assetConfig: + ... + extensionScripts: + - /path/to/tech-preview.js +---- -. Now run the *_scaleup.yml_* playbook. If your inventory file is located -somewhere other than the default *_/etc/ansible/hosts_*, specify the location -with the `-i option`: +.. Restart the master service: + +ifdef::openshift-origin[] ---- -# ansible-playbook [-i /path/to/file] \ - /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml +# systemctl restart origin-master +---- +endif::[] +ifdef::openshift-enterprise[] ---- +# systemctl restart atomic-openshift-master +---- +endif::[] + +[[advanced-verifying-the-installation]] +== Verifying the Installation -. After the playbook completes successfully, -xref:advanced-verifying-the-installation[verify the installation]. +// tag::verifying-the-installation[] +After the installation completes: -. Finally, move any hosts you had defined in the *[new_nodes]* section up into -the *[nodes]* section (but leave the *[new_nodes]* section definition itself in -place) so that subsequent runs using this inventory file are aware of the nodes -but do not handle them as new nodes. For example: +. Verify that the master is started and nodes +are registered and reporting in *Ready* status. _On the master host_, run the +following as root: + -==== ---- -[nodes] -master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" -node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}" -node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" -node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" +# oc get nodes -[new_nodes] +NAME STATUS AGE +master.example.com Ready,SchedulingDisabled 165d +node1.example.com Ready 165d +node2.example.com Ready 165d ---- + +. To verify that the web console is installed correctly, use the master host name +and the web console port number to access the web console with a web browser. ++ +For example, for a master host with a host name of `master.openshift.com` and +using the default port of `8443`, the web console would be found at `\https://master.openshift.com:8443/console`. + +// end::verifying-the-installation[] + +[NOTE] ==== +The default port for the console is `8443`. If this was changed during the installation, the port can be found at *openshift_master_console_port* in the *_/etc/ansible/hosts_* file. +==== + +[discrete] +[[verifying-multiple-etcd-hosts]] +==== Verifying Multiple etcd Hosts + +If you installed multiple *etcd* hosts: + +. 
First, verify that the *etcd* package, which provides the `etcdctl` +command, is installed: ++ +---- +# yum install etcd +---- + +. On a master host, verify the *etcd* cluster health, substituting for the FQDNs +of your *etcd* hosts in the following: ++ +---- +# etcdctl -C \ + https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \ + --ca-file=/etc/origin/master/master.etcd-ca.crt \ + --cert-file=/etc/origin/master/master.etcd-client.crt \ + --key-file=/etc/origin/master/master.etcd-client.key cluster-health +---- + +. Also verify the member list is correct: ++ +---- +# etcdctl -C \ + https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \ + --ca-file=/etc/origin/master/master.etcd-ca.crt \ + --cert-file=/etc/origin/master/master.etcd-client.crt \ + --key-file=/etc/origin/master/master.etcd-client.key member list +---- + +[discrete] +[[verifying-multiple-masters-haproxy]] +==== Verifying Multiple Masters Using HAProxy + +If you installed multiple masters using HAProxy as a load balancer, browse to +the following URL according to your *[lb]* section definition and check +HAProxy's status: + +---- +http://:9000 +---- + +You can verify your installation by consulting the +https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Load_Balancer_Administration/ch-haproxy-setup-VSA.html[HAProxy +Configuration documentation]. [[uninstalling-advanced]] == Uninstalling {product-title} @@ -1614,7 +2636,6 @@ in this procedure. . Create a different inventory file that only references those hosts. For example, to only delete content from one node: + -==== ---- [OSEv3:children] nodes <1> @@ -1622,10 +2643,10 @@ nodes <1> [OSEv3:vars] ansible_ssh_user=root ifdef::openshift-enterprise[] -deployment_type=openshift-enterprise +openshift_deployment_type=openshift-enterprise endif::[] ifdef::openshift-origin[] -deployment_type=origin +openshift_deployment_type=origin endif::[] [nodes] @@ -1634,7 +2655,6 @@ node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" <1> Only include the sections that pertain to the hosts you are interested in uninstalling. <2> Only include hosts that you want to uninstall. -==== . Specify that new inventory file using the `-i` option when running the *_uninstall.yml_* playbook: @@ -1656,30 +2676,20 @@ any specified hosts. [[installer-known-issues]] == Known Issues -The following are known issues for specified installation configurations. - -*Multiple Masters* - -- On failover, it is possible for the controller manager to overcorrect, which -causes the system to run more pods than what was intended. However, this is a -transient event and the system does correct itself over time. See -https://github.com/GoogleCloudPlatform/kubernetes/issues/10030 for details. +- On failover in multiple master clusters, it is possible for the controller +manager to overcorrect, which causes the system to run more pods than what was +intended. However, this is a transient event and the system does correct itself +over time. See https://github.com/kubernetes/kubernetes/issues/10030 for +details. - On failure of the Ansible installer, you must start from a clean operating system installation. If you are using virtual machines, start from a fresh -image. 
If you are using bare metal machines, run the following on all hosts: -+ ----- -# yum -y remove openshift openshift-* etcd docker - -# rm -rf /etc/origin /var/lib/openshift /etc/etcd \ - /var/lib/etcd /etc/sysconfig/atomic-openshift* /etc/sysconfig/docker* \ - /root/.kube/config /etc/ansible/facts.d /usr/share/openshift ----- +image. If you are using bare metal machines, see +xref:uninstalling-advanced[Uninstalling {product-title}] for instructions. == What's Next? -Now that you have a working OpenShift instance, you can: +Now that you have a working {product-title} instance, you can: - xref:../../install_config/configuring_authentication.adoc#install-config-configuring-authentication[Configure authentication]; by default, authentication is set to @@ -1691,9 +2701,9 @@ ifdef::openshift-origin[] xref:../../install_config/configuring_authentication.adoc#AllowAllPasswordIdentityProvider[Allow All]. endif::[] -- Deploy an xref:../registry/index.adoc#install-config-registry-overview[integrated Docker registry]. -- Deploy a xref:../router/index.adoc#install-config-router-overview[router]. +- Deploy an xref:../../install_config/registry/index.adoc#install-config-registry-overview[integrated Docker registry]. +- Deploy a xref:../../install_config/router/index.adoc#install-config-router-overview[router]. ifdef::openshift-origin[] -- xref:../../install_config/imagestreams_templates.adoc#install-config-imagestreams-templates[Populate your OpenShift installation] +- xref:../../install_config/imagestreams_templates.adoc#install-config-imagestreams-templates[Populate your {product-title} installation] with a useful set of Red Hat-provided image streams and templates. endif::[] diff --git a/release_notes/ose_3_1_release_notes.adoc b/release_notes/ose_3_1_release_notes.adoc index 97f5d4863ae8..f3858f63d3aa 100644 --- a/release_notes/ose_3_1_release_notes.adoc +++ b/release_notes/ose_3_1_release_notes.adoc @@ -252,7 +252,7 @@ https://bugzilla.redhat.com/show_bug.cgi?id=1275388[BZ#1275388]:: Previously, so https://bugzilla.redhat.com/show_bug.cgi?id=1265187[BZ#1265187]:: When persistent volume claims (PVC) were created from a template, sometimes the same volume would be mounted to multiple PVCs. At the same time, the volume would show that only one PVC was being used. The issue has been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1279308[BZ#1279308]:: Previously, using a etcd storage location other than the default, as defined in the master configuration file, would result in an upgrade fail at the "generate etcd backup" stage. This issue has now been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1276599[BZ#1276599]:: Basic authentication passwords can now contain colons. -https://bugzilla.redhat.com/show_bug.cgi?id=1279744[BZ#1279744]:: Previously, giving `*EmptyDir*` volumes a different default permission setting and group ownership could affect deploying the *postgresql-92-rhel7* image. The issue has been fixed. +https://bugzilla.redhat.com/show_bug.cgi?id=1279744[BZ#1279744]:: Previously, giving `*emptyDir*` volumes a different default permission setting and group ownership could affect deploying the *postgresql-92-rhel7* image. The issue has been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1276395[BZ#1276395]:: Previously, an error could occur when trying to perform an HA install using Ansible, due to a problem with SRC files. The issue has been fixed. 
https://bugzilla.redhat.com/show_bug.cgi?id=1267733[BZ#1267733]:: When installing a etcd cluster with hosts with different network interfaces, the install would fail. The issue has been fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1274239[BZ#1274239]:: Previously, when changing the default project region from *infra* to *primary*, old route and registry pods are stuck in the terminating stage and could not be deleted, meaning that new route and registry pods could not be deployed. The issue has been fixed. diff --git a/release_notes/ose_3_2_release_notes.adoc b/release_notes/ose_3_2_release_notes.adoc index cb57e3ec4a69..77b53ab8d8ae 100644 --- a/release_notes/ose_3_2_release_notes.adoc +++ b/release_notes/ose_3_2_release_notes.adoc @@ -130,7 +130,7 @@ Authentication] for details. - The `SETUID` and `SETGID` capabilities have been added back to the *anyuid* SCC, which ensures that programs that start as root and then drop to a lower permission level will work by default. -- Quota support has been added for `*emptydir*`. When the quota is enabled on an +- Quota support has been added for `*emptyDir*`. When the quota is enabled on an XFS system, nodes will limit the amount of space any given project can use on a node to a fixed upper bound. The quota is tied to the `*FSGroup*` of the project. Administrators can control this value by editing the project directly diff --git a/using_images/other_images/jenkins.adoc b/using_images/other_images/jenkins.adoc index 779b0683935e..c5ddf3c7b622 100644 --- a/using_images/other_images/jenkins.adoc +++ b/using_images/other_images/jenkins.adoc @@ -339,7 +339,7 @@ are already installed. $ oc new-app jenkins-persistent ---- -.. Or an `EmptyDir` type volume (where configuration does not persist across pod restarts): +.. Or an `emptyDir` type volume (where configuration does not persist across pod restarts): ---- $ oc new-app jenkins-ephemeral ----
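The `jenkins-ephemeral` example above stores Jenkins data in an `emptyDir`
volume, which is why its configuration is lost when the pod is deleted. For
reference, the camelCase spelling matters because `emptyDir` is the field name
the Kubernetes API expects in a volume definition; other spellings are not
recognized as that volume type. The following is a minimal illustrative pod
definition, with a hypothetical name and image that are not taken from this
patch:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example          # hypothetical name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/rhel7/rhel   # any image that writes to the volume
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch     # where the emptyDir contents appear in the container
  volumes:
  - name: scratch
    emptyDir: {}                  # camelCase field name expected by the API
----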