Dynamic provisioning of Gluster volumes with the Kubernetes module of Oracle Linux Cloud Native Environment

Before You Begin

This 30-minute tutorial shows you how to configure Oracle Linux components to dynamically create storage volumes as Kubernetes users request them.

Background

The Kubernetes module for the Oracle Linux Cloud Native Environment (OLCNE) includes multiple storage class provisioners. In this example, we will create an integrated system where the Kubernetes worker nodes provide persistent, Gluster-based storage in addition to their normal execution duties.

Using the Kubernetes Glusterfs plugin and Heketi, we can then dynamically provision Gluster volumes for use as Kubernetes PersistentVolumes and automatically destroy them when their PersistentVolumeClaims are deleted.

What Do You Need?

There are existing examples for installing the Kubernetes module, so to avoid repetition we'll work from the Vagrant build available at https://github.com/oracle/vagrant-boxes/

It will create four VMs for us (one master and three worker nodes) before deploying the Kubernetes module. The Oracle Linux VirtualBox template includes an unused second virtual disk (/dev/sdb), which we will use to store our Gluster data. This tutorial requires 12GB of RAM and 15GB of disk space.
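
Before you start, you can optionally confirm the prerequisites on a Linux host. This is a minimal sketch; the exact version numbers reported are not important:

    [user@demo lab]$ vagrant --version       # Vagrant builds and manages the VMs
    [user@demo lab]$ VBoxManage --version    # VirtualBox provides the hypervisor and base template
    [user@demo lab]$ free -h                 # confirm at least 12GB of RAM is available
    [user@demo lab]$ df -h .                 # confirm at least 15GB of free disk space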

Environment

This lab involves multiple VMs, and you will need to perform steps on different VMs. The easiest way to do this is with the vagrant command:

vagrant ssh <hostname>

Once connected you can return to your desktop with a standard logout or exit command.

For example

  1. [user@demo lab]$ vagrant ssh master1
    Last login: Tue Aug 20 23:56:58 2019 from 10.0.2.2
    [vagrant@master1 ~]$ hostname
    master1.vagrant.vm
    [vagrant@master1 ~]$ exit
    logout
    Connection to 127.0.0.1 closed.
    [user@demo lab]$ vagrant ssh worker1
    Last login: Tue Aug 20 05:25:49 2019 from 10.0.2.2
    [vagrant@worker1 ~]$ hostname
    worker1.vagrant.vm
    [vagrant@worker1 ~]$ logout
    Connection to 127.0.0.1 closed.

Start the Lab environment

You will first download and start the VMs we will be using in this lab environment. This is simplified through the use of Vagrant.

  1. Clone the Git repository and change to the OLCNE directory
    [user@demo lab]$ git clone https://github.com/oracle/vagrant-boxes
    [user@demo lab]$ cd vagrant-boxes/OLCNE
  2. Install the Vagrant plugins

    To avoid directly modifying the Vagrantfile, we override its parameters with the vagrant-env plugin

    [user@demo lab]$ vagrant plugin install vagrant-env

    To automatically configure the /etc/hosts file of our VMs, we use the vagrant-hosts plugin

    [user@demo lab]$ vagrant plugin install vagrant-hosts
  3. Define the environment file .env.local

    Increase the memory available to each VM and raise the worker count from two to three; the third worker is needed because Gluster creates volumes with three replicas by default.

    [user@demo lab]$ cp .env .env.local
    [user@demo lab]$ vi .env.local

    Uncomment and update the line that defines MEMORY to be

    MEMORY = 3072

    Uncomment and update the line that defines NB_WORKERS to be

    NB_WORKERS = 3
  4. Start the cluster
    [user@demo lab]$ vagrant up

    The deployment of the Kubernetes module will take some time as additional software is downloaded and installed, but eventually you will see:

    master1: ===== Your Oracle Linux Cloud Native Environment is operational. =====
    master1: NAME                 STATUS   ROLES    AGE     VERSION
    master1: master1.vagrant.vm   Ready    master   5m2s    v1.14.8+1.0.2.el7
    master1: worker1.vagrant.vm   Ready    <none>   3m37s   v1.14.8+1.0.2.el7
    master1: worker2.vagrant.vm   Ready    <none>   4m21s   v1.14.8+1.0.2.el7
    master1: worker3.vagrant.vm   Ready    <none>   3m59s   v1.14.8+1.0.2.el7

    Note the Ready status of each Kubernetes node. If you have issues with vagrant up, please open a GitHub issue. A quick status check is sketched below.
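
As a quick sanity check from your desktop, you can confirm all four VMs are running and that the cluster reports Ready nodes. A minimal sketch:

    [user@demo lab]$ vagrant status                                # all four VMs should be "running"
    [user@demo lab]$ vagrant ssh master1 -c "kubectl get nodes"    # master1 plus three workers should be Ready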


Configure the workers

For each of the workers we will install and configure the Gluster service.

Open three terminal windows and use the vagrant ssh hostname command to connect to the three workers: worker1, worker2 and worker3.

  1. Enable the RPM repositories
    [vagrant@worker1 ~]$ sudo yum install -y oracle-gluster-release-el7
  2. Install the RPMs
    [vagrant@worker1 ~]$ sudo yum install -y glusterfs-server glusterfs-client
  3. Allow the Gluster service through the firewall
    [vagrant@worker1 ~]$ sudo firewall-cmd --add-service=glusterfs --permanent
    [vagrant@worker1 ~]$ sudo firewall-cmd --reload
  4. Set up the Gluster environment with TLS to encrypt management traffic between Gluster nodes

    As we're using the OLCNE Vagrant box, X.509 certificates have already been created, so we will re-use them

    [vagrant@worker1 ~]$ sudo cp /etc/olcne/pki/production/ca.cert /etc/ssl/glusterfs.ca
    [vagrant@worker1 ~]$ sudo cp /etc/olcne/pki/production/node.key /etc/ssl/glusterfs.key
    [vagrant@worker1 ~]$ sudo cp /etc/olcne/pki/production/node.cert /etc/ssl/glusterfs.pem
    [vagrant@worker1 ~]$ sudo touch /var/lib/glusterd/secure-access

    Note this encrypts only management traffic between Gluster nodes; see the summary and Appendix A for more information.

  5. Enable the service
    [vagrant@worker1 ~]$ sudo systemctl enable --now glusterd.service

Remember to execute these steps on all three worker VMs; a quick verification is sketched below. After that you can close the three connections to the worker VMs.
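
Before moving on, you may want to verify the Gluster setup. A minimal sketch; run it on worker1, worker2 and worker3:

    [vagrant@worker1 ~]$ systemctl is-active glusterd.service                 # should print "active"
    [vagrant@worker1 ~]$ sudo ls -l /etc/ssl/glusterfs.ca /etc/ssl/glusterfs.key /etc/ssl/glusterfs.pem
    [vagrant@worker1 ~]$ sudo ls -l /var/lib/glusterd/secure-access           # management encryption flag file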


Configure the master

With Gluster installed on the workers, we will install and configure Heketi, which will use the Gluster nodes to provision storage.

We perform this step on the VM master1, connecting with the vagrant ssh hostname command.

  1. Enable the RPM repositories
    [vagrant@master1~]$ sudo yum install -y oracle-gluster-release-el7
  2. Install the RPMs
    [vagrant@master1~]$ sudo yum install -y heketi heketi-client
  3. Allow the required port through the firewall
    [vagrant@master1~]$ sudo firewall-cmd --add-port=8080/tcp --permanent
    [vagrant@master1~]$ sudo firewall-cmd --reload
  4. Create an SSH authentication key for Heketi to use when communicating with the worker nodes
    [vagrant@master1~]$ sudo ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
    [vagrant@master1~]$ sudo ssh-copy-id -i /etc/heketi/heketi_key.pub worker1
    [vagrant@master1~]$ sudo ssh-copy-id -i /etc/heketi/heketi_key.pub worker2
    [vagrant@master1~]$ sudo ssh-copy-id -i /etc/heketi/heketi_key.pub worker3
    [vagrant@master1~]$ sudo chown heketi:heketi /etc/heketi/heketi_key*
  5. Configure the /etc/heketi/heketi.json file.
    [vagrant@master1~]$ sudo vi /etc/heketi/heketi.json

    Set use_auth to true

      "use_auth": true,  

    Define a passphrase for the admin and user accounts

        "admin": {
          "key": "Admin Password"
        },
        "user": {
          "key": "User Password"
        }
    

    Change the glusterfs executor from mock to ssh

      "glusterfs": {
        ...
        "executor": "ssh",

    Define the sshexec properties

        "sshexec": {
          "keyfile": "/etc/heketi/heketi_key",
          "user": "root",
          "port": "22",
          "fstab": "/etc/fstab"
        },
    
  6. Enable the service
    [vagrant@master1~]$ sudo systemctl enable --now heketi.service
  7. Validate Heketi is working
    [vagrant@master1~]$ curl localhost:8080/hello
    Hello from Heketi
  8. Create a Heketi topology file

    In this file we declare the names of the hosts to use, their IP addresses, and the block devices that are free for Heketi to use

    [vagrant@master1~]$ sudo vi /etc/heketi/topology.json
    {
      "clusters": [
        {
          "nodes": [
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "worker1.vagrant.vm"
                  ],
                  "storage": [
                    "192.168.99.111"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/sdb"
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "worker2.vagrant.vm"
                  ],
                  "storage": [
                    "192.168.99.112"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/sdb"
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "worker3.vagrant.vm"
                  ],
                  "storage": [
                    "192.168.99.113"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/sdb"
              ]
            }
          ]
        }
      ]
    }
  9. Load the Heketi topology file using the username and password defined in step five
    [vagrant@master1~]$ heketi-cli --user admin --secret "Admin Password" topology load --json=/etc/heketi/topology.json
    Creating cluster ... ID: b569b6963558a97481a8d6830122866c
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node worker1.vagrant.vm ... ID: 22905523990f4d110beeefb88f319af9
            Adding device /dev/sdb ... OK
        Creating node worker2.vagrant.vm ... ID: 04945b7cbe3d21121a99a1ade306ded0
            Adding device /dev/sdb ... OK
        Creating node worker3.vagrant.vm ... ID: 586575f7cc0cb42c6e56c573a8fc308f
          Adding device /dev/sdb ... OK
  10. List the nodes of known clusters (a fuller topology check is sketched after this list)
    [vagrant@master1~]$ heketi-cli --secret "Admin Password" --user admin node list
    Id:04945b7cbe3d21121a99a1ade306ded0
    Cluster:b569b6963558a97481a8d6830122866c
    Id:22905523990f4d110beeefb88f319af9
    Cluster:b569b6963558a97481a8d6830122866c
    Id:586575f7cc0cb42c6e56c573a8fc308f
    Cluster:b569b6963558a97481a8d6830122866c
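
To see everything Heketi now manages in one place (cluster, nodes and devices), you can also query the cluster list and the full topology. A minimal sketch, assuming the same admin credentials:

    [vagrant@master1 ~]$ heketi-cli --user admin --secret "Admin Password" cluster list
    [vagrant@master1 ~]$ heketi-cli --user admin --secret "Admin Password" topology info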

Configure Kubernetes

With Heketi configured, we now configure Kubernetes to communicate with Heketi when volumes are requested.

We perform this step on the VM master1, connecting with the vagrant ssh hostname command.

  1. Create a Secret object to store our Heketi administrator passphrase
    [vagrant@master1~]$ kubectl create secret generic heketi-admin --type='kubernetes.io/glusterfs' --from-literal=key='Admin Password' --namespace=default
    secret/heketi-admin created
  2. Create the hyperconverged StorageClass object as the default StorageClass
    [vagrant@master1~]$ cat << EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hyperconverged
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://master1.vagrant.vm:8080"
      restauthenabled: "true"
      restuser: "admin"
      secretNamespace: "default"
      secretName: "heketi-admin"
    EOF
    storageclass.storage.k8s.io/hyperconverged created
  3. Create some example PersistentVolumeClaims
    [vagrant@master1~]$ for x in {0..5}; do
    cat << EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
     name: gluster-pvc-${x}
    spec:
     accessModes:
      - ReadWriteMany
     resources:
      requests:
        storage: 1Gi
    EOF
    done
    persistentvolumeclaim/gluster-pvc-0 created
    persistentvolumeclaim/gluster-pvc-1 created
    persistentvolumeclaim/gluster-pvc-2 created
    persistentvolumeclaim/gluster-pvc-3 created
    persistentvolumeclaim/gluster-pvc-4 created
    persistentvolumeclaim/gluster-pvc-5 created 
  4. Verify the PersistentVolumeClaims are dynamically backed by Gluster volumes

    It may take a few moments for these to be bound

    [vagrant@master1 ~]$ kubectl get pvc
    NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
    gluster-pvc-0   Bound    pvc-a68d41fa-328d-11ea-ab06-08002720edbc   1Gi        RWX            hyperconverged   43s
    gluster-pvc-1   Bound    pvc-a6a1f522-328d-11ea-ab06-08002720edbc   1Gi        RWX            hyperconverged   43s
    gluster-pvc-2   Bound    pvc-a6b73d0c-328d-11ea-ab06-08002720edbc   1Gi        RWX            hyperconverged   43s
    gluster-pvc-3   Bound    pvc-a6cf6b06-328d-11ea-ab06-08002720edbc   1Gi        RWX            hyperconverged   43s
    gluster-pvc-4   Bound    pvc-a6ec2027-328d-11ea-ab06-08002720edbc   1Gi        RWX            hyperconverged   42s
    gluster-pvc-5   Bound    pvc-a707ffff-328d-11ea-ab06-08002720edbc   1Gi        RWX            hyperconverged   42s 
  5. Create an example Deployment that uses a PersistentVolumeClaim defined above
    [vagrant@master1~]$ cat << EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        run: demo-nginx
      name: demo-nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          run: demo-nginx
      template:
        metadata:
          labels:
            run: demo-nginx
        spec:
          containers:
          - image: nginx
            name: demo-nginx
            ports:
            - containerPort: 80
            volumeMounts:
            - name: demo-nginx-pvc
              mountPath: /usr/share/nginx/html
          volumes:
          - name: demo-nginx-pvc
            persistentVolumeClaim:
              claimName: gluster-pvc-1
    EOF
    deployment.apps/demo-nginx created 
  6. Ensure that our example Gluster-backed nginx pod has started successfully
    [vagrant@master1 ~]$ kubectl get pod -l run=demo-nginx 
    NAME                          READY   STATUS    RESTARTS   AGE
    demo-nginx-75fd7f5594-bsqf4   1/1     Running   0          88s
  7. Verify the volume used is Glusterfs

    Replace the pod name in the command with the one identified in the previous step

    [vagrant@master1 ~]$ kubectl exec demo-nginx-75fd7f5594-bsqf4 -ti -- mount -t fuse.glusterfs
    192.168.99.111:vol_6b0062f9e1cc0e5592120679eb249ae9 on /usr/share/nginx/html type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

At this point our Kubernetes environment creates a Gluster volume when a PersistentVolumeClaim is created and deletes it when the PersistentVolumeClaim is deleted.
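
The delete path can be confirmed with an unused claim, for example gluster-pvc-0, which is not mounted by the example Deployment. A minimal sketch; reclaiming the backing volume may take a few moments:

    [vagrant@master1 ~]$ kubectl delete pvc gluster-pvc-0
    [vagrant@master1 ~]$ kubectl get pv                                                  # the matching PersistentVolume is removed
    [vagrant@master1 ~]$ heketi-cli --user admin --secret "Admin Password" volume list   # one fewer Gluster volume

If you want the full set of six claims again, re-run the loop from step 3 above.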


Summary

When a PersistentVolumeClaim is created, the Kubernetes API server (running on master1) requests a volume from Heketi (also running on master1). Heketi creates a Gluster volume across the three Gluster nodes (worker1, worker2 and worker3) and responds to the API server with the volume details. When directed to start the pod, the worker mounts the Gluster filesystem and presents it to the pod.

Note that Gluster volumes created by Heketi will not have I/O encryption enabled; the configuration above only encrypts management traffic.
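
If I/O encryption is required, it can be enabled per volume directly in Gluster; Heketi does not manage this. A minimal sketch, assuming the certificate files copied earlier and a volume that is not currently mounted by any pod (<volume_name> is a placeholder taken from sudo gluster volume list):

    [vagrant@worker1 ~]$ sudo gluster volume stop <volume_name>              # the volume must be stopped first
    [vagrant@worker1 ~]$ sudo gluster volume set <volume_name> client.ssl on
    [vagrant@worker1 ~]$ sudo gluster volume set <volume_name> server.ssl on
    [vagrant@worker1 ~]$ sudo gluster volume start <volume_name>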


Appendix A: Enabling TLS in Heketi

When deploying in production you may want to encrypt communications between the Kubernetes API server and Heketi. Earlier we configured Heketi and the StorageClass to use HTTP; we now update this to HTTPS.

We perform this step on the VM master1, connecting with the vagrant ssh hostname command.

  1. Update the /etc/heketi/heketi.json file.
    [vagrant@master1~]$ sudo vi /etc/heketi/heketi.json

    Insert the following after the port definition

    "_enable_tls_comment": "Enable TLS in Heketi Server",
    "enable_tls": true,
    
    "_cert_file_comment": "Path to a valid certificate file",
    "cert_file": "/etc/olcne/pki/production/node.cert",
    
    "_key_file_comment": "Path to a valid private key file",
    "key_file": "/etc/olcne/pki/production/node.key",

    For example

    {
      "_port_comment": "Heketi Server Port Number",
      "port": "8080",
    
      "_enable_tls_comment": "Enable TLS in Heketi Server",
      "enable_tls": true,
    
      "_cert_file_comment": "Path to a valid certificate file",
      "cert_file": "/etc/olcne/pki/production/node.cert",
    
      "_key_file_comment": "Path to a valid private key file",
      "key_file": "/etc/olcne/pki/production/node.key",
    
      "_use_auth": "Enable JWT authorization. Please enable for deployment",
      "use_auth": true,
      ...
    
  2. Restart the service
    [vagrant@master1~]$ sudo systemctl restart heketi.service
  3. Trust the example Certificate Authority
    [vagrant@master1~]$ sudo cp /etc/olcne/pki/production/ca.cert /etc/pki/ca-trust/source/anchors/
    [vagrant@master1~]$ sudo update-ca-trust extract
    
  4. Validate that Heketi is working over HTTPS
    [vagrant@master1~]$ curl https://localhost:8080/hello
    Hello from Heketi
  5. (Optional) Delete an existing StorageClass object

    Note that StorageClass parameters cannot be updated; if you already have a hyperconverged StorageClass, you must delete it before continuing

    [vagrant@master1~]$ kubectl delete storageclass hyperconverged
    storageclass.storage.k8s.io "hyperconverged" deleted
  6. Create the StorageClass object with an HTTPS resturl
    [vagrant@master1~]$ cat << EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hyperconverged
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "https://localhost:8080"
      restauthenabled: "true"
      restuser: "admin"
      secretNamespace: "default"
      secretName: "heketi-admin"
    EOF
    storageclass.storage.k8s.io/hyperconverged created
  7. To use further heketi-cli commands, you must first declare the HTTPS URL
    [vagrant@master1 ~]$ export HEKETI_CLI_SERVER=https://localhost:8080

Heketi communications are now encrypted.
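
You can optionally inspect the certificate Heketi now presents; the exact lines shown depend on your curl version. A minimal sketch:

    [vagrant@master1 ~]$ curl -v https://localhost:8080/hello 2>&1 | grep -iE 'SSL connection|subject:|issuer:'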


Appendix B: Example Gluster output

  1. (Optional) Define the Heketi server URL

    If you have completed Appendix A: Enabling TLS in Heketi, you must declare the updated URL first

    [vagrant@master1 ~]$ export HEKETI_CLI_SERVER=https://localhost:8080
  2. List volumes
    [vagrant@master1 ~]$ heketi-cli --user admin --secret "Admin Password" volume list
    Id:6f8138811d8b7e39d145234cccd2e6b7    Cluster:b569b6963558a97481a8d6830122866c    Name:vol_6f8138811d8b7e39d145234cccd2e6b7
    Id:82d135b634f32c57ee2973ad14655e3f    Cluster:b569b6963558a97481a8d6830122866c    Name:vol_82d135b634f32c57ee2973ad14655e3f
    Id:bcc202bc33ae7a9276ff276547be3914    Cluster:b569b6963558a97481a8d6830122866c    Name:vol_bcc202bc33ae7a9276ff276547be3914
    Id:e34c5ca4ae7d5414b2918ec9294fb5c7    Cluster:b569b6963558a97481a8d6830122866c    Name:vol_e34c5ca4ae7d5414b2918ec9294fb5c7
    Id:ea43fee36910d74a8a5e9979798ee861    Cluster:b569b6963558a97481a8d6830122866c    Name:vol_ea43fee36910d74a8a5e9979798ee861
    Id:fbe0f5ba7eac242723c47951f1c8887d    Cluster:b569b6963558a97481a8d6830122866c    Name:vol_fbe0f5ba7eac242723c47951f1c8887d
  3. Show volume info
    [vagrant@master1 ~]$ heketi-cli --user admin --secret "Admin Password" volume info 6f8138811d8b7e39d145234cccd2e6b7
    Name: vol_6f8138811d8b7e39d145234cccd2e6b7
    Size: 1
    Volume Id: 6f8138811d8b7e39d145234cccd2e6b7
    Cluster Id: b569b6963558a97481a8d6830122866c
    Mount: 192.168.99.112:vol_6f8138811d8b7e39d145234cccd2e6b7
    Mount Options: backup-volfile-servers=192.168.99.111,192.168.99.113
    Block: false
    Free Size: 0
    Reserved Size: 0
    Block Hosting Restriction: (none)
    Block Volumes: []
    Durability Type: replicate
    Distributed+Replica: 3
    Snapshot Factor: 1.00
    
  4. Show the state of the Gluster volume from a worker's perspective (a sketch for inspecting the underlying bricks follows this list)
    [vagrant@worker1 ~]$ sudo gluster volume status vol_6f8138811d8b7e39d145234cccd2e6b7
    Status of volume: vol_6f8138811d8b7e39d145234cccd2e6b7
    Gluster process                             TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Brick 192.168.99.112:/var/lib/heketi/mounts
    /vg_6b72b2d7c9a34707fa3a0c2fc681ddcc/brick_
    cda566b09812ab9d758fe3cd783d80da/brick      49153     0          Y       14929
    Brick 192.168.99.113:/var/lib/heketi/mounts
    /vg_e1092aa9e86a191ba3edef46d6e69860/brick_
    56aa9db0e398d1247475558d12511b5c/brick      49154     0          Y       14209
    Brick 192.168.99.111:/var/lib/heketi/mounts
    /vg_bf6b912689f5d687acad1ab9acb5c098/brick_
    0d035224714e0ac35fef2ae957661daa/brick      49154     0          Y       23190
    Self-heal Daemon on localhost               N/A       N/A        Y       23950
    Self-heal Daemon on worker2.vagrant.vm      N/A       N/A        Y       15744
    Self-heal Daemon on 192.168.99.113          N/A       N/A        Y       15010
    
    Task Status of Volume vol_6f8138811d8b7e39d145234cccd2e6b7
    ------------------------------------------------------------------------------
    There are no active volume tasks
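
Heketi builds each brick on an LVM logical volume carved from /dev/sdb and mounts it under /var/lib/heketi/mounts, as the brick paths above show. A minimal sketch for inspecting that layout on a worker:

    [vagrant@worker1 ~]$ sudo pvs                # /dev/sdb should be listed as a physical volume
    [vagrant@worker1 ~]$ sudo lvs                # one brick logical volume per Gluster volume, plus pool volumes
    [vagrant@worker1 ~]$ df -h | grep heketi     # brick filesystems mounted under /var/lib/heketi/mounts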
