oc new-app eboraas/apache - pulls an image from Docker Hub and creates the image stream, dc, rc, pod and the service
- It will actually fail, complaining that "Current security policy prevents your containers from being run as the root user".
- It is necessary to either modify the Docker image to run as a standard user (better) or allow the original Docker image to run as root.
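The first option (a non-root image) can be sketched as a derived Dockerfile. The uid (1001), the paths and the port below are illustrative assumptions about the eboraas/apache image, not verified against it:

```shell
# Sketch of option one: derive an image that does not need root.
# Uid, paths and port are assumptions about the eboraas/apache image.
cat > Dockerfile <<'EOF'
FROM eboraas/apache
# Unprivileged users cannot bind ports below 1024, so move apache off port 80
RUN sed -i 's/^Listen 80/Listen 8080/' /etc/apache2/ports.conf
# Let the non-root user write the content, runtime and log directories
RUN chown -R 1001:0 /var/www /var/run /var/lock /var/log/apache2
USER 1001
EOF
# Then build and push it, e.g.: docker build -t apache-nonroot .
```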
oc adm policy add-scc-to-user anyuid -z default - adds the 'anyuid' capability (ability to run as any user, including root) to the 'default' service account
- The 'default' service account is used to run containers within our current project. Alternatively, we can create another privileged service account and instruct OpenShift to use it to run the pod.
- The result can be verified by calling 'oc edit scc anyuid' and checking the listed 'users'.
- Afterwards, the apache service has to be redeployed.
oc deploy apache --latest - redeploys the latest version of apache and applies the changes to the security context
- Apache pods will be running after this point. Now we need to configure a route.
oc expose service/apache --hostname=apache.apps.suren.me - makes the apache service available under the specified host name (works with HTTP and any TLS service supporting SNI, i.e. HTTPS)
- 'new-app' will actually create several objects, all of which can be listed with 'oc get all'
is/apache - image stream
dc/apache - deployment config
rc/apache-# - replication controller (along with the current deployment revision)
po/apache-#-<id> - the actually running pod (along with the current deployment revision and a replica id for each running replica)
routes/apache - the route, created afterwards with the expose command
- Adding persistent storage using a shared mount point on all nodes
oc volumes dc/apache --add -t hostPath --path='/mnt/openshift/apache/' -m /etc/apache2 - changes the deployment configuration and adds a volume to it
- It will actually immediately redeploy the pod, which will fail due to an invalid 'scc' being selected (see security). After adding the 'hostPath' volume, our pod will match the 'restricted' scc instead of 'anyuid', because 'hostPath' volumes are not allowed under 'anyuid' (just check 'oc get scc').
- Therefore, we also need to add the 'hostmount-anyuid' capability to the service account running the pod
oc adm policy add-scc-to-user hostmount-anyuid -z default - adds the 'hostmount-anyuid' capability to the 'default' service account
- Though, this is not enough. It seems that, by default, the pod does not announce that it needs root. As explained in the security section, from the 'scc's matching the container requirements, the 'scc' with the highest priority and the lowest possible permissions within that priority level is selected. The selected 'scc' can be checked with: 'oc get pod <pod> -o yaml | grep scc'.
- One solution is to increase the priority of 'hostmount-anyuid'
? How can we indicate that root is required?
oc edit scc hostmount-anyuid - set a high value for 'priority' and save
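Instead of editing interactively, the priority can also be raised with a patch; the value 10 is an arbitrary choice, only its order relative to the other sccs matters:

```shell
# Non-interactive alternative to 'oc edit': raise the scc priority directly
oc patch scc hostmount-anyuid -p '{"priority": 10}'
```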
- Again, we need to redeploy the pod
oc deploy apache --latest - redeploys the latest version of apache and selects the appropriate scc
- This is, however, not the best approach, as the container will not verify whether the Gluster/NFS volume is actually mounted or not. Instead, a GlusterFS service can be registered with endpoints pointing to all (main?) gluster nodes. Then, the persistent volume and persistent volume claim are made. The claim can be mounted with
oc volumes dc/apache --add -t pvc --claim-name=gfs-openshift -m /mnt/openshift
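The service, endpoints, PV and PVC mentioned above can look like the following sketch. The node IPs, the capacity and the gluster volume name ('openshift') are assumptions; only the claim name matches the mount command above:

```shell
# Writes the objects to a file; create them afterwards with: oc create -f gluster-storage.yaml
# Node IPs, capacity and gluster volume name are illustrative assumptions.
cat > gluster-storage.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
---
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 192.168.26.11
  - ip: 192.168.26.12
  ports:
  - port: 1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gfs-openshift
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: openshift
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gfs-openshift
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
```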
The firewall should be configured on all gluster nodes to allow connections to the gluster service. Otherwise, the volume will be mounted read-only, as the default firewall rules allow access to the local gluster node, but not to the remote ones (so reads will succeed from the local replica, but there will be no quorum for writes):
firewall-cmd --permanent --zone=public --add-service=glusterfs
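Note that '--permanent' only updates the saved configuration; it also has to be applied to the running firewall:

```shell
# --permanent changes the saved config only; reload to apply it immediately
firewall-cmd --reload
```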
Further problems can be investigated by checking the gluster logs, which can be found on the node running the pod with
ps xa | grep glusterfs | grep apache-4-8ol0q-glusterfs
Potentially, issues with SELinux are possible. The labels should match. Check the documentation.
ls -lZ /mnt/openshift/ - check SELinux labels on mount
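If the labels are wrong, relabeling the host path usually helps. The type below is the one commonly required for paths mounted into containers; treat the exact type name as an assumption for your distribution:

```shell
# Relabel the shared directory so containers are allowed to access it
# ('container_file_t' on newer SELinux policies)
chcon -R -t svirt_sandbox_file_t /mnt/openshift/
```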
- We can get the default configuration out of the container using the 'rsync' command
oc rsync <pod>:/etc/apache2 /mnt/openshift/ - copies the specified folder from the running pod to the local folder
- After the apache configuration is changed, we need to restart the containers. This can be achieved by scaling to 0 and then scaling back to 1.
oc scale --replicas 0 rc/apache-#
oc scale --replicas 1 rc/apache-#
- Later, the volume can be removed with
oc volumes dc/apache --remove --name=<vol_name> - <vol_name> can be found by listing the volumes
- Deploy from template
oc process --parameters -n openshift mysql-persistent - lists the parameters of the mysql template
oc process -n openshift -v MYSQL_USER=adei -v MYSQL_PASSWORD=adei -v MYSQL_ROOT_PASSWORD=ipepdv -v MYSQL_DATABASE=adei -v VOLUME_CAPACITY=1Gi -v MYSQL_VERSION=5.7 mysql-persistent | oc create -f -
- Check the actually configured user and password
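Assuming the template created a 'mysql' deployment config, the configured credentials can be listed from its environment (on newer clients the command is 'oc set env'):

```shell
# List the environment variables (including MYSQL_USER / MYSQL_PASSWORD) set on the dc
oc env dc/mysql --list
```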
- Expose on a non-standard port (find the port with 'oc describe svc/mysql | grep NodePort')
oc expose dc/mysql --type NodePort --name mysql-ingress
or, alternatively, using an IP (the EXTERNAL-IP in 'oc get svc' output)
oc expose dc/mysql --type LoadBalancer --name mysql-ingress
The traffic to these IPs should be routed to one of the OpenShift nodes:
route add -net 172.46.0.0 netmask 255.255.0.0 gw 192.168.26.1