
[Coming Soon] Dynamic Provisioning of GlusterFS volumes in Kubernetes/OpenShift!!

Gluster
2016-08-24

This post is about the dynamic provisioning capability of the ‘glusterfs’ plugin in Kubernetes/OpenShift. I have submitted a Pull Request to Kubernetes that adds this functionality for GlusterFS.

At present there are no network storage provisioners in Kubernetes, even though provisioners exist for the cloud providers. The idea here is to make the glusterfs plugin capable of provisioning volumes on demand from Kubernetes/OpenShift. Cool, isn’t it? Indeed, this is a nice feature to have. With this in place, an OSE user requests some space, for example 20G, and the glusterfs plugin takes the request, creates a 20G Gluster volume, and binds it to the claim. The plugin can use any REST service, but the example patch is based on ‘heketi’.
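To give a feel for what happens behind the scenes: for a 20G claim the plugin asks the REST service to carve out a matching Gluster volume. With heketi this is roughly equivalent to running the following by hand (illustrative only; it assumes heketi is running and the trusted storage pool topology has already been loaded into it):

 # point heketi-cli at the same REST endpoint as 'resturl' in the plugin config
 export HEKETI_CLI_SERVER=http://127.0.0.1:8081
 # request a 20G volume; heketi creates it on the trusted pool and prints its name and id
 heketi-cli volume create --size=20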

Here is the workflow:
Start your Kubernetes controller manager with the options shown below (the network-storage-provisioner flags are the important ones):


 ...kube controller-manager --v=3 \
   --service-account-private-key-file=/tmp/kube-serviceaccount.key \
   --root-ca-file=/var/run/kubernetes/apiserver.crt \
   --enable-hostpath-provisioner=false \
   --enable-network-storage-provisioner=true \
   --storage-config=/tmp \
   --net-provider=glusterfs \
   --pvclaimbinder-sync-period=15s \
   --cloud-provider= \
   --master=127.0.0.1:8080

Create a file called `gluster.json` in the `/tmp` directory. The important fields in this config file are ‘endpoint’ and ‘resturl’. The endpoint has to be defined and must match your setup. The `resturl` points to the REST service that takes the request and creates a Gluster volume in the backend; as mentioned earlier, I am using `heketi` for this.


 [hchiramm@dhcp35-111 tmp]$ cat gluster.json
 {
   "endpoint": "glusterfs-cluster",
   "resturl": "http://127.0.0.1:8081",
   "restauthenabled": false,
   "restuser": "",
   "restuserkey": ""
 }
 [hchiramm@dhcp35-111 tmp]$
 

We also have to define an ENDPOINT and a SERVICE. Below are the example configuration files. In the ENDPOINT, “ip” has to be filled with an IP from your Gluster trusted pool.


[hchiramm@dhcp35-111 ]$ cat glusterfs-endpoint.json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "10.36.4.112"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "10.36.4.112"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }
  ]
}

SERVICE:
Please note that the service name matches the ENDPOINT name.


[hchiramm@dhcp35-111 ]$ cat gluster-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}
[hchiramm@dhcp35-111 ]$

Finally, we have a PersistentVolumeClaim file as shown below.
NOTE: the requested size of the volume is ‘20Gi’, and the storage-class annotation is set to ‘glusterfs’ so that this claim is served by the GlusterFS provisioner:


[hchiramm@dhcp35-111 ]$ cat gluster-pvc.json
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterc",
    "annotations": {
      "volume.alpha.kubernetes.io/storage-class": "glusterfs"
    }
  },
  "spec": {
    "accessModes": [
      "ReadOnlyMany"
    ],
    "resources": {
      "requests": {
        "storage": "20Gi"
      }
    }
  }
}
[hchiramm@dhcp35-111 ]$

Let's start defining the endpoint, service and PVC.


[hchiramm@dhcp35-111 ]$ ./kubectl create -f glusterfs-endpoint.json
endpoints "glusterfs-cluster" created
[hchiramm@dhcp35-111 ]$ ./kubectl create -f gluster-service.json
service "glusterfs-cluster" created
[hchiramm@dhcp35-111 ]$ ./kubectl get ep,service
NAME                   ENDPOINTS       AGE
ep/glusterfs-cluster   10.36.6.105:1   14s
NAME                    CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/glusterfs-cluster   10.0.0.10                  1/TCP     9s
svc/kubernetes          10.0.0.1                   443/TCP   13m
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
[hchiramm@dhcp35-111 ]$

Now, let's request a claim!

[hchiramm@dhcp35-111 ]$ ./kubectl create -f gluster-pvc.json
persistentvolumeclaim "glusterc" created
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
NAME                                          CAPACITY   ACCESSMODES   STATUS   CLAIM              REASON   AGE
pv/pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c   20Gi       ROX           Bound    default/glusterc            2s
NAME           STATUS   VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc/glusterc   Bound    pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c   0                        3s
[hchiramm@dhcp35-111 ]$

Awesome! Based on the request, a PV was created and bound to the PVC!


[hchiramm@dhcp35-111 ]$ ./kubectl describe pv pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c
Name:            pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c
Labels:
Status:          Bound
Claim:           default/glusterc
Reclaim Policy:  Delete
Access Modes:    ROX
Capacity:        20Gi
Message:
Source:
    Type:           Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:  glusterfs-cluster
    Path:           vol_038b56756f4e3ab4b07a87494097941c
    ReadOnly:       false
No events.
[hchiramm@dhcp35-111 ]$
 

Verify that the volume exists in the backend:

 [root@ ~]# heketi-cli volume list |grep 038b56756f4e3ab4b07a87494097941c
 038b56756f4e3ab4b07a87494097941c
 [root@ ~]#

Let's delete the PV claim:


[hchiramm@dhcp35-111 ]$ ./kubectl delete pvc glusterc
persistentvolumeclaim "glusterc" deleted
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
[hchiramm@dhcp35-111 ]$

It got deleted! Since the PV's reclaim policy is Delete, the backing Gluster volume is removed as well.

Verify it from the backend:


 [root@ ~]# heketi-cli volume list |grep 038b56756f4e3ab4b07a87494097941c
 [root@ ~]# 

We can use this volume in application pods by referring to the claim name.
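For instance, a pod could mount the dynamically provisioned volume through the claim like this (a minimal sketch; the pod name, container image, and mount path below are placeholders chosen for illustration, not part of the setup above):

 {
   "kind": "Pod",
   "apiVersion": "v1",
   "metadata": {
     "name": "gluster-app"
   },
   "spec": {
     "containers": [
       {
         "name": "app",
         "image": "nginx",
         "volumeMounts": [
           {
             "name": "glustervol",
             "mountPath": "/usr/share/nginx/html"
           }
         ]
       }
     ],
     "volumes": [
       {
         "name": "glustervol",
         "persistentVolumeClaim": {
           "claimName": "glusterc"
         }
       }
     ]
   }
 }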
I hope this is a nice feature to have!

Please let me know if you have any comments/suggestions.

Also, the patch (https://github.com/kubernetes/kubernetes/pull/30888) is under review upstream, as mentioned earlier, and will hopefully land in an upcoming Kubernetes release. I will post an update here as soon as it is available upstream.
