1 change: 1 addition & 0 deletions examples/README.md
@@ -11,6 +11,7 @@ available on top of Kubernetes and Docker.
* [Node.js echo Sample](https://github.com/openshift/nodejs-ex) highlights the simple workflow from creating project, new app from GitHub, building, deploying, running and updating.
* [Project Quotas and Resource Limits](./project-quota) demonstrates how quota and resource limits can be applied to resources in an OpenShift project.
* [Replicated ZooKeeper Template](./zookeeper) provides a template for an OpenShift service that exposes a simple set of primitives that distributed applications can build upon to implement higher level services for synchronization, configuration maintenance, and groups and naming.
* [Storage Examples](./storage-examples) provides high-level tutorials and templates for local and persistent storage on OpenShift using simple nginx applications.
* [Database Templates](./db-templates) provide templates for ephemeral and persistent storage on OpenShift using MongoDB, MySQL, and PostgreSQL.
* [Clustered Etcd Template](./etcd) provides a template for setting up a clustered instance of the [Etcd](https://github.com/coreos/etcd) key-value store as a service on OpenShift.
* [Configurable Git Server](./gitserver) sets up a service capable of automatic mirroring of Git repositories, intended for use within a container or Kubernetes pod.
10 changes: 10 additions & 0 deletions examples/storage-examples/README.md
@@ -0,0 +1,10 @@
# OpenShift Container Storage Examples [WIP]

OpenShift applications/containers/pods can use persistent local and distributed storage. The examples below explore some of these scenarios:

* [Local Storage Examples](./local-storage-examples)
* [GlusterFS Storage Examples](./gluster-examples)
* [Ceph Storage Examples](./ceph-examples)
* TBD - Cinder

54 changes: 54 additions & 0 deletions examples/storage-examples/ceph-examples/ENV.md
@@ -0,0 +1,54 @@
OSE Environment
===============

* 2 RHEL-7 VMs for running the OpenShift Enterprise (OSE) master and node hosts
* A working ceph cluster, which can be a bare metal cluster, one or more VMs, one or more containers, or a very simple all-in-one container.

The RHEL-7 hosts running the OSE master and OSE nodes should have the following services enabled and running:
* selinux (*setenforce 1*)
* iptables (*systemctl start iptables*)
* firewalld (*systemctl start firewalld*). Note: if firewalld cannot be started because the service is masked, run *systemctl unmask firewalld* and then start it again.
* docker, running on all OSE nodes (master and workers) and on the ceph host. Docker version 1.8 currently has storage setup issues; see below for how to upgrade or downgrade the docker version on your VMs. A quick check of these services is shown below the list.
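
A quick sanity check for these services on each host (a minimal sketch using the commands referenced above):

```
$ getenforce
Enforcing

$ systemctl is-active iptables firewalld docker
active
active
active
```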

### Docker:
On each OSE node, make sure the docker version is 1.6 or 1.7 -- not 1.8. Docker versions later than 1.8 may work fine, but there is currently a storage-related issue with docker 1.8. The examples here have been tested with docker 1.6 and 1.7.

```
$ docker --version
Docker version 1.6.0, build 350a636/1.6.0
```

#### Downgrading Docker:
Docker 1.8 appears to have a storage setup problem where the docker metapool is created too small, so if you are running docker 1.8, consider downgrading to 1.7 or 1.6.

```
$ yum --showduplicates list | grep ^docker

#if you see docker 1.6 or 1.7 then...
$ yum install -y docker-1.6 #or docker-1.7
```

If the above docker downgrade fails, reporting "Error: Nothing to do", then first attempt a yum clean:

```
$ yum clean all
# redo yum install from above...
```

If the downgrade still fails, the rpm for the target docker version can be downloaded directly from docker.com. At the time of this writing, this link worked:

```
https://get.docker.com/rpm/1.7.1/fedora-21/RPMS/x86_64/docker-engine-1.7.1-1.fc21.x86_64.rpm
```

#### Upgrading Docker:
If docker is lower than 1.6 then:

```
$ yum install -y docker-1.6 #or docker-1.7
```


### Other Installations:
1. [OSE installation](OSE.md)
2. [MySQL installation](MYSQL.md)
124 changes: 124 additions & 0 deletions examples/storage-examples/ceph-examples/MYSQL.md
@@ -0,0 +1,124 @@
Setting Up and Validating Containerized MySQL
=============================================

First, OSE's Security Context Constraints (SCC) need to be defined such that the *seLinuxContext* and *runAsUser* values are set to "RunAsAny". SELinux is still enabled/enforcing on the OSE-master and OSE-nodes, as described in the [OSE environment readme](ENV.md).
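
For reference, the relevant fields in the edited SCCs look roughly like this fragment (a sketch; the actual *oc edit scc* steps are shown in [OSE.md](OSE.md)):

```
...
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
...
```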

The mysql example uses the official mysql image found [here](https://hub.docker.com/_/mysql/).

> **Reviewer comment:** this should use the OpenShift-provided mysql image instead, which doesn't require any security changes to work: https://github.com/openshift/mysql

First, pull down the image on *each* OSE-node:


```
#on *each* OSE-node:
$ docker pull mysql
```

Next, test to ensure that mysql can be run in a container:

```
$ docker run --name mysql -e MYSQL_ROOT_PASSWORD=foopass -d mysql
```

Note: to re-run the above, first remove the archived container so that the container name, "mysql", can be reused:

```
$ docker ps -a

#the container name "mysql" (set above via --name) can be used in place of the ID
$ docker stop <mysql-container-ID>
$ docker rm <mysql-container-ID>

#NOTE: to remove all containers:
$ docker rm $(docker ps -aq)
```

Shell into the mysql container, run mysql -p and create a simple "us_states" database:

```
$ docker exec -it <mysql-container-ID> bash
bash# mysql -p
mysql> show databases;
+---------------------+
| Database |
+---------------------+
| information_schema |
| #mysql50#lost+found |
| mysql |
| performance_schema |
+---------------------+
4 rows in set (0.12 sec)

# create a simple database for us states:
mysql> CREATE DATABASE us_states;
Query OK, 1 row affected (0.03 sec)

mysql> USE us_states;
Database changed

mysql> CREATE TABLE states (id INT NOT NULL PRIMARY KEY AUTO_INCREMENT, state CHAR(25), population INT(9));
Query OK, 0 rows affected (1.93 sec)

mysql> INSERT INTO states (id, state, population) VALUES (NULL, 'Alabama', '4822023');
Query OK, 1 row affected (0.17 sec)

mysql> SELECT * FROM states;
+----+---------+------------+
| id | state | population |
+----+---------+------------+
| 1 | Alabama | 4822023 |
+----+---------+------------+
1 row in set (0.00 sec)

mysql> quit
bash# exit
```

Note: *-p* causes mysql to prompt for the user's password, even if the password has been provided via the MYSQL_ROOT_PASSWORD env variable. In some cases, as seen here, *-p* is required for the mysql command to succeed. Omitting *-p* (even when the root password is specified in the environment) can cause the error below:

```
#On the host running the mysql container we see the password is provided:
$ docker inspect 9501299ac215 | grep PASSWORD
"MYSQL_ROOT_PASSWORD=foopass",

#inside the mysql container we get an error unless we use -p and supply the password:
root@mysql:/# mysql #no -p
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
```

Delete the mysql container so that it can be re-created from a pod later:

```
$ docker stop <mysql-container-ID>
$ docker rm <mysql-container-ID>
```

### Security:
When using the ceph-rbd storage plugin, if the mysql container is not privileged it will fail with this error:

```
chown: cannot read directory `/var/lib/mysql/': Permission denied
```

The above error is found by looking at the docker logs on the target OSE-node:

```
$ docker ps -a #need -a since the container start fails
$ docker logs <mysql-id>
```

The "quick" solution is to set the mysql pod to be privileged, which is done with this yaml fragment:

```
...
spec:
containers:
- image: mysql
...
securityContext:
capabilities: {}
privileged: true
...
```
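
For added context, below is a sketch of what a complete mysql pod definition might look like once this fragment is combined with a ceph-rbd volume. This is illustrative only: the monitor address, rbd pool, and rbd image name are placeholders for your environment, and the secret is the ceph-secret object created in [OSE.md](OSE.md):

```
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: foopass
    securityContext:
      capabilities: {}
      privileged: true
    volumeMounts:
    - name: mysql-rbd
      mountPath: /var/lib/mysql    #where mysql keeps its data
  volumes:
  - name: mysql-rbd
    rbd:
      monitors:
      - '192.168.122.133:6789'     #placeholder: a ceph monitor host:port
      pool: rbd                    #placeholder: the rbd pool name
      image: mysql-rbd-image       #placeholder: the rbd image name
      user: admin
      secretRef:
        name: ceph-secret          #the secret created in OSE.md
      fsType: ext4
```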

Note that SELinux is still enabled/enforcing.

```
$ getenforce
Enforcing
```

180 changes: 180 additions & 0 deletions examples/storage-examples/ceph-examples/OSE.md
@@ -0,0 +1,180 @@
Set Up a Simple OpenShift Cluster Supporting Ceph
================================================

### Environment:
The environment used for all of the ceph examples is described [here](ENV.md). It is assumed that ceph is already up and running, either on bare metal, in a VM, or containerized.

### Setting up OSE:
The following OpenShift documents are pertinent to setting up OSE:
* https://docs.openshift.com/enterprise/3.0/admin_guide/install/prerequisites.html -- Prerequisites
* https://docs.openshift.org/latest/getting_started/administrators.html#running-in-a-docker-container -- Getting Started for Administrator
* https://docs.openshift.com/enterprise/3.0/admin_guide/install/quick_install.html -- **Quick Installation**
* https://docs.openshift.com/enterprise/3.0/admin_guide/configuring_authentication.html -- Configuring Authentication
* https://docs.openshift.com/enterprise/3.0/admin_guide/install/advanced_install.html -- Advanced Installation

The examples here use "Method 1: Running the Installation Utility From the Internet", described in the [Quick Installation](https://docs.openshift.com/enterprise/3.0/admin_guide/install/quick_install.html) Guide above.

The following checks can be made to ensure that OSE is installed correctly and is running:
* is the OSE master server running? On the OSE-master host, execute *systemctl status openshift-master*. Use *systemctl restart openshift-master* to restart the master services.
* are all OSE worker-nodes running? On each OSE-node, execute *systemctl status openshift-node*; to restart the services, use *systemctl restart openshift-node* on each OSE-node.
* is the OSE Web Console accessible? Log in to the OpenShift Console via the GUI at https://ose-master-host-name-or-ip:8443/console
* is the *oc* command available? On the OSE-master server, log in to OSE via the command line using *oc login -u admin*:
```
$ oc login -u admin
Password:
Using project "default"

$ oc get projects
NAME DISPLAY NAME STATUS
default Active
openshift Active
openshift-infra Active
```

### Ceph on each OSE-node:
Each schedulable OSE-node needs the ceph-common library installed, and for now, due to a current ceph packaging bug, also needs full ceph installed.

Note: in order to install the full ceph, each OSE node may need certain ceph repos enabled.

> **Reviewer comment:** This is no longer needed with the version of kubernetes that we bundle.


```
$ subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rhceph-1.3-installer-rpms \
--enable=rhel-7-server-rhceph-1.3-mon-rpms --enable=rhel-7-server-rhceph-1.3-osd-rpms
```

Now install ceph on each schedulable OSE-node:

```
$ yum install -y ceph-common

#and due to the ceph packaging bug where ceph-rbdnamed() is missing
$ yum install -y ceph
```

### Ceph Secret:
The ceph-rbd storage plugin uses a ceph secret for authorization. This is a short yaml file which resides on the OSE-master host but gets its value from the ceph monitor host.

```
#on a ceph monitor server:
$ ceph auth get-key client.admin
AQDva7JVEuVJBBAAc8e1ZBWhqUB9K/zNZdOHoQ==

$ echo "AQDva7JVEuVJBBAAc8e1ZBWhqUB9K/zNZdOHoQ=="| base64
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

echo -n

QVFEdmE3SlZFdVZKQkJBQWM4ZTFaQldocVVCOUsvek5aZE9Ib1E9PQo=
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

update this secret using echo -n

# copy the above output
```

Back on the OSE-master node, edit the [ceph-secret file](ceph-secret.yaml), pasting in the base64 value above.
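
For reference, the ceph-secret.yaml file will look something like this sketch (only the *name* and the base64-encoded *key* value matter here):

```
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFEdmE3SlZFdVZKQkJBQWM4ZTFaQldocVVCOUsvek5aZE9Ib1E9PQ==
```

Then *oc create* the ceph-secret object: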

```
$ oc create -f ceph-secret.yaml
secrets/ceph-secret

$ oc get secret
NAME TYPE DATA
ceph-secret Opaque 1
...
```

### Security:
OSE Security Context Constraints (SCC) are described in this [OSE Authorization Guide](https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/authorization.html#security-context-constraints). The *privileged* and *restricted* SCCs are added as defaults by OSE and need to be modified in order for mysql and ceph to have sufficient privileges. See also [General OSE Authorization](https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/authorization.html).

After logging in to OSE as the *admin* user, edit the two SCCs, as shown below:

```
$ oc login -u admin
$ oc edit scc privileged
$ oc edit scc restricted
#change "MustRunAsRange" to "RunAsAny"

$ oc get scc
NAME PRIV CAPS HOSTDIR SELINUX RUNASUSER
privileged true [] true RunAsAny RunAsAny
restricted false [] false RunAsAny RunAsAny
```
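
The field being changed looks like this fragment of the SCC yaml (a sketch for reference):

```
...
runAsUser:
  type: RunAsAny       #was: MustRunAsRange
seLinuxContext:
  type: RunAsAny
...
```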

### MySQL:
For each OSE-node see the [setting up mysql doc](MYSQL.md).

### Verification/Validation:

On the OSE master host:

```
$ systemctl status openshift-master -l
openshift-master.service - OpenShift Master
Loaded: loaded (/usr/lib/systemd/system/openshift-master.service; enabled)
Active: active (running) since Thu 2015-09-03 16:35:56 EDT; 8h ago
Docs: https://github.com/openshift/origin
Main PID: 49702 (openshift)
CGroup: /system.slice/openshift-master.service
└─49702 /usr/bin/openshift start master --config=/etc/openshift/master/master-config.yaml --loglevel=4
...
DeploymentConfigs for trigger on ImageStream openshift/mysql
Sep 04 00:46:02 rhel7-ose-1 openshift-master[49702]: I0904 00:46:02.803183 49702 controller.go:38] Detecting changed images for DeploymentConfig default/docker-registry:2
...
Sep 04 00:46:03 rhel7-ose-1 openshift-master[49702]: I0904 00:46:03.001061 49702 controller.go:85] Ignoring DeploymentConfig change for default/docker-registry:2 (latestVersion=2); same as Deployment default/docker-registry-2
```

And, still on the OSE-master:

```
$ oc get nodes
NAME LABELS STATUS
192.168.122.179 kubernetes.io/hostname=192.168.122.179 Ready,SchedulingDisabled
192.168.122.254 kubernetes.io/hostname=192.168.122.254 Ready

```

On *each* OSE schedulable node:

```
$ rpm -qa|grep ceph
ceph-common-0.94.1-16.el7cp.x86_64
ceph-0.94.1-16.el7cp.x86_64
```

```
$ systemctl status openshift-node -l
openshift-node.service - OpenShift Node
Loaded: loaded (/usr/lib/systemd/system/openshift-node.service; enabled)
Drop-In: /usr/lib/systemd/system/openshift-node.service.d
└─openshift-sdn-ovs.conf
Active: active (running) since Tue 2015-09-01 18:58:27 EDT; 2 days ago
Docs: https://github.com/openshift/origin
Main PID: 94526 (openshift)
CGroup: /system.slice/openshift-node.service
└─94526 /usr/bin/openshift start node --config=/etc/openshift/node/node-config.yaml --loglevel=4


Sep 04 00:48:09 rhel7-ose-2 openshift-node[94526]: I0904 00:48:09.198143 94526 manager.go:1388] Pod infra container looks good, keep it "mysql_default"
Sep 04 00:48:09 rhel7-ose-2 openshift-node[94526]: I0904 00:48:09.198195 94526 manager.go:1411] pod "mysql_default" container "mysql" exists as 77f4af567e3dd3b10656ad5ee38a39600174a87c519d6a23735e96cf0ee4208a
Sep 04 00:48:09 rhel7-ose-2 openshift-node[94526]: I0904 00:48:09.198214 94526 prober.go:180] Readiness probe for "mysql_default:mysql" succeeded
Sep 04 00:48:09 rhel7-ose-2 openshift-node[94526]: I0904 00:48:09.198224 94526 manager.go:1442] probe success: "mysql"
Sep 04 00:48:09 rhel7-ose-2 openshift-node[94526]: I0904 00:48:09.198239 94526 manager.go:1515] Got container changes for pod "mysql_default": {StartInfraContainer:false InfraContainerId:dca749fa3530d552a643a836051d4b00ef4e3a69c69ebc8ede059848b3b27569 ContainersToStart:map[] ContainersToKeep:map[dca749fa3530d552a643a836051d4b00ef4e3a69c69ebc8ede059848b3b27569:-1 77f4af567e3dd3b10656ad5ee38a39600174a87c519d6a23735e96cf0ee4208a:0]}
Sep 04 00:48:09 rhel7-ose-2 openshift-node[94526]: I0904 00:48:09.198268 94526 kubelet.go:2245] Generating status for "mysql_default"
...
Sep 04 00:48:09 rhel7-ose-2 openshift-node[94526]: I0904 00:48:09.202366 94526 status_manager.go:129] Ignoring same status for pod "mysql_default", status: {Phase:Running Conditions:[{Type:Ready Status:True}] Message: Reason: HostIP:192.168.122.254 PodIP:10.1.0.41 StartTime:2015-09-03 19:10:07.598461546 -0400 EDT ContainerStatuses:[{Name:mysql State:{Waiting:<nil> Running:0xc213e6d0c0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:true RestartCount:0 Image:mysql ImageID:docker://7eee2d462c8f6ffacfb908cc930559e21778f60afdb2d7e9cf0f3025274d7ea8 ContainerID:docker://77f4af567e3dd3b10656ad5ee38a39600174a87c519d6a23735e96cf0ee4208a}]}
```

And some docker checks on *each* OSE node:

```
$ docker ps #make sure docker is running

#if docker is not running then start it:
$ systemctl start docker
$ systemctl enable docker
$ systemctl status docker -l
```

### Log Files
*journalctl* and *systemctl status* are the main ways to view OSE log files. The *systemctl status* command is shown above. Here are some *journalctl* examples:

```
$ journalctl -xe -u openshift-master

#and on the OSE node:
$ journalctl -xe -u openshift-node
```

It's often necessary to scroll right (right arrow) to see pertinent info.