
[Coming Soon] Dynamic Provisioning of GlusterFS volumes in Kubernetes/Openshift!!

Gluster
2016-08-24

This post is about the dynamic provisioning capability of the ‘glusterfs’ plugin in Kubernetes/OpenShift. I have submitted a Pull Request to Kubernetes to add this functionality for GlusterFS.

At present there are no network storage provisioners in Kubernetes, even though provisioners exist for the cloud providers. The idea here is to make the glusterfs plugin capable of provisioning volumes on demand from Kubernetes/OpenShift. Cool, isn't it? Indeed, this is a nice feature to have. With it, an OSE user requests some space (for example, 20G), and the glusterfs plugin takes this request, creates a 20G volume, and binds it to the claim. The plugin can use any REST service, but the example patch is based on ‘heketi’.
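
To make that concrete, here is roughly the backend operation the plugin triggers through the REST service for a 20G request, expressed as the equivalent heketi-cli call. This is only an illustration of what happens behind the scenes, not code from the patch; the plugin itself talks to heketi's REST API using the settings described below.

 # Rough equivalent of what the provisioner asks heketi for on a 20G claim
 export HEKETI_CLI_SERVER=http://127.0.0.1:8081
 heketi-cli volume create --size=20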

Here is the workflow:
Start your kubernetes controller manager with the options shown below:


 ...kube controller-manager --v=3 \
   --service-account-private-key-file=/tmp/kube-serviceaccount.key \
   --root-ca-file=/var/run/kubernetes/apiserver.crt \
   --enable-hostpath-provisioner=false \
   --enable-network-storage-provisioner=true \
   --storage-config=/tmp \
   --net-provider=glusterfs \
   --pvclaimbinder-sync-period=15s \
   --cloud-provider= \
   --master=127.0.0.1:8080

 

Create a file called `gluster.json` in the `/tmp` directory (the path given to `--storage-config`). The important fields in this config file are ‘endpoint’ and ‘resturl’. The endpoint has to be defined and must match your setup. The `resturl` points to the REST service that takes the request and creates a gluster volume in the backend. As mentioned earlier, I am using `heketi` for this.


 [hchiramm@dhcp35-111 tmp]$ cat gluster.json
 {
     "endpoint": "glusterfs-cluster",
     "resturl": "http://127.0.0.1:8081",
     "restauthenabled": false,
     "restuser": "",
     "restuserkey": ""
 }
 [hchiramm@dhcp35-111 tmp]$
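
Before wiring this into the controller manager, it is worth checking that the `resturl` actually answers. Assuming a standard heketi deployment (which exposes a `/hello` endpoint), a quick sanity check could look like this:

 # Should return HTTP 200 if heketi is up and listening on the resturl
 curl http://127.0.0.1:8081/hello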
 

We also have to define an ENDPOINT and a SERVICE. Below are the example configuration files. ENDPOINT: the "ip" field has to be filled with an IP address from your gluster trusted pool.


[hchiramm@dhcp35-111 ]$ cat glusterfs-endpoint.json
{
    "kind": "Endpoints",
    "apiVersion": "v1",
    "metadata": {
        "name": "glusterfs-cluster"
    },
    "subsets": [
        {
            "addresses": [
                { "ip": "10.36.4.112" }
            ],
            "ports": [
                { "port": 1 }
            ]
        },
        {
            "addresses": [
                { "ip": "10.36.4.112" }
            ],
            "ports": [
                { "port": 1 }
            ]
        }
    ]
}

SERVICE:
Please note that the Service name matches the ENDPOINT name.


[hchiramm@dhcp35-111 ]$ cat gluster-service.json
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "glusterfs-cluster"
    },
    "spec": {
        "ports": [
            { "port": 1 }
        ]
    }
}
[hchiramm@dhcp35-111 ]$

Finally, we have a Persistent Volume Claim file as shown below.
NOTE: The requested size of the volume is ‘20Gi’:


[hchiramm@dhcp35-111 ]$ cat gluster-pvc.json
{
    "kind": "PersistentVolumeClaim",
    "apiVersion": "v1",
    "metadata": {
        "name": "glusterc",
        "annotations": {
            "volume.alpha.kubernetes.io/storage-class": "glusterfs"
        }
    },
    "spec": {
        "accessModes": [
            "ReadOnlyMany"
        ],
        "resources": {
            "requests": {
                "storage": "20Gi"
            }
        }
    }
}
[hchiramm@dhcp35-111 ]$

Let's start by creating the endpoint and the service:


[hchiramm@dhcp35-111 ]$ ./kubectl create -f glusterfs-endpoint.json
endpoints "glusterfs-cluster" created
[hchiramm@dhcp35-111 ]$ ./kubectl create -f gluster-service.json
service "glusterfs-cluster" created
[hchiramm@dhcp35-111 ]$ ./kubectl get ep,service
NAME                    ENDPOINTS       AGE
ep/glusterfs-cluster    10.36.6.105:1   14s
NAME                    CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/glusterfs-cluster   10.0.0.10                  1/TCP     9s
svc/kubernetes          10.0.0.1                   443/TCP   13m
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
[hchiramm@dhcp35-111 ]$

Now, let's request a claim!

[hchiramm@dhcp35-111 ]$ ./kubectl create -f gluster-pvc.json
persistentvolumeclaim "glusterc" created
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
NAME                                          CAPACITY   ACCESSMODES   STATUS   CLAIM              REASON   AGE
pv/pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c   20Gi       ROX           Bound    default/glusterc            2s
NAME           STATUS   VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc/glusterc   Bound    pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c   0                        3s
[hchiramm@dhcp35-111 ]$

Awesome! Based on the request, a PV was created and bound to the PVC!


[hchiramm@dhcp35-111 ]$ ./kubectl describe pv pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c
Name:            pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c
Labels:
Status:          Bound
Claim:           default/glusterc
Reclaim Policy:  Delete
Access Modes:    ROX
Capacity:        20Gi
Message:
Source:
    Type:           Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:  glusterfs-cluster
    Path:           vol_038b56756f4e3ab4b07a87494097941c
    ReadOnly:       false
No events.
[hchiramm@dhcp35-111 ]$
 

Verify that the volume exists in the backend:

 [root@ ~]# heketi-cli volume list |grep 038b56756f4e3ab4b07a87494097941c
 038b56756f4e3ab4b07a87494097941c
 [root@ ~]#
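
If you have access to the gluster CLI on one of the trusted pool nodes, the same volume can also be inspected directly; its name matches the `Path` shown in the PV description above:

 # On a gluster trusted pool node (assuming the gluster CLI is available there)
 gluster volume info vol_038b56756f4e3ab4b07a87494097941c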

Let's delete the PV claim:


[hchiramm@dhcp35-111 ]$ ./kubectl delete pvc glusterc
persistentvolumeclaim "glusterc" deleted
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
[hchiramm@dhcp35-111 ]$

It got deleted!

Verify it from the backend:


 [root@ ~]# heketi-cli volume list |grep 038b56756f4e3ab4b07a87494097941c
 [root@ ~]# 

We can use the volume in application pods by referring to the claim name, as shown in the sketch below.
Hope this is a nice feature to have!
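
For illustration, here is a minimal pod spec that mounts the provisioned volume by referring to the claim `glusterc`. The pod name, container name, image, and mount path are placeholders, not part of the original setup; the mount is marked read-only to match the ReadOnlyMany access mode of the claim.

{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "gluster-app"
    },
    "spec": {
        "containers": [
            {
                "name": "web",
                "image": "nginx",
                "volumeMounts": [
                    {
                        "name": "glustervol",
                        "mountPath": "/usr/share/nginx/html",
                        "readOnly": true
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "glustervol",
                "persistentVolumeClaim": {
                    "claimName": "glusterc"
                }
            }
        ]
    }
}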

Please let me know if you have any comments/suggestions.

Also, as mentioned earlier, the patch (https://github.com/kubernetes/kubernetes/pull/30888) is undergoing review upstream, and hopefully it will make it into a Kubernetes release soon. I will provide an update here as soon as it is available upstream.
