The All-in-One install I detailed in Up and Running with oVirt 3.3 includes everything you need to run virtual machines and get a feel for what oVirt can do, but the downside of the local storage domain type is that it limits you to that single All-in-One (AIO) node.
You can shift your AIO install to a shared storage configuration to invite additional nodes to the party, and oVirt has supported the usual shared storage suspects such as NFS and iSCSI since the beginning.
New in oVirt 3.3, however, is a storage domain type for GlusterFS that takes advantage of Gluster’s new libgfapi feature to boost performance compared to FUSE or NFS-based methods of accessing Gluster storage with oVirt.
With a GlusterFS data center in oVirt, you can distribute your storage resources right alongside your compute resources. As a new feature, GlusterFS domain support is rougher around the edges than more established parts of oVirt, but once you get it up and running, it’s worth the trouble.
In oVirt, each host can be part of only one data center at a time. Before we decommission our local storage domain, we have to shut down any VMs running on our host, and, if we're interested in moving them to our new Gluster storage domain, we need to ferry those machines over to our export domain.
GlusterFS Domain & RHEL/CentOS:
The new, libgfapi-based GlusterFS storage type has a couple of software prerequisites that aren’t currently available for RHEL/CentOS — the feature requires qemu 1.3 or better and libvirt 1.0.1 or better. Earlier versions of those components don’t know about the GlusterFS block device support, so while you’ll be able to configure a GlusterFS domain on one of those distros today, any attempts to launch VMs will fail.
Versions of qemu and libvirt with the needed functionality backported are in the works, and should be available soon, but for now, you’ll need Fedora 19 to use the GlusterFS domain type. For RHEL or CentOS hosts, you can still use Gluster-based storage, but you’ll need to do so with the POSIXFS storage type.
The setup procedures are very similar, so I’ll include the POSIXFS instructions below as well in case you want to pursue that route in the meantime. Once the updated packages become available, I’ll modify this howto accordingly.
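If you're not sure which qemu and libvirt versions your host is running, a quick query like the following will tell you where you stand (package names vary a bit between Fedora and RHEL/CentOS):

rpm -qa | egrep '^(qemu|libvirt)' | sort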
SELinux, Permissive
Currently, the GlusterFS storage scenario described in this howto requires that SELinux be put in permissive mode. You can put SELinux in permissive mode with the command:
sudo setenforce 0
To make the shift to permissive mode persist between reboots, edit “/etc/sysconfig/selinux” and change SELINUX=enforcing to SELINUX=permissive.
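If you prefer a one-liner, something like this makes the same change (assuming the stock file layout):

# --follow-symlinks edits the real /etc/selinux/config rather than replacing the symlink
sudo sed -i --follow-symlinks 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/sysconfig/selinux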
Visit the “Virtual Machines” tab in your Administrator Portal, shut down any running VMs, and click “Export,” “OK” to copy them over to your export domain.
While an export is in progress, there’ll be an hourglass icon next to your VM name. Once any VMs you wish to save have moved over, you can reclaim some space by right-clicking the VMs and hitting “Remove,” and then “OK.”
Next, detach your ISO_DOMAIN from the local storage data center by visiting the “Storage” tab, clicking on the ISO_DOMAIN, visiting the “Data Center” tab in the bottom pane, clicking “local_datacenter,” then “Maintenance,” then “Detach,” and “OK” in the following dialog. Follow these same steps to detach your EXPORT_DOMAIN as well.
Now, click the “Data Centers” tab, select the “Default” data center, and click “Edit.” In the resulting dialog box, choose “GlusterFS” in the “Type” drop down menu and click “OK.”
If you’re using RHEL/CentOS and taking the Gluster via POSIXFS storage route I referenced above, choose “POSIXFS” in the “Type” drop down menu instead.
Next, click the “Clusters” tab, select the “Default” cluster, and click “Edit.” In the resulting dialog box, click the check box next to “Enable Gluster Service” and click “OK.”
Then, visit the “Hosts” tab, select your “local_host” host, and click “Maintenance.” When the host is in maintenance mode, click “Edit,” select “Default” from the “Data Center” drop down menu, hit “OK,” and then “OK” again in the following dialog.
Next, install the vdsm-gluster package (we'll start gluster and restart vdsm in a moment):
sudo yum install vdsm-gluster
Now, edit the file “/etc/glusterfs/glusterd.vol” [bz#] to add “option rpc-auth-allow-insecure on” to the list of options under “volume management.”
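For reference, here's roughly what the "volume management" block looks like with the new option in place — your stock file may list additional options, and the important part is the rpc-auth-allow-insecure line before "end-volume":

volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option rpc-auth-allow-insecure on
end-volume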
As part of the virt store optimizations that oVirt applies to Gluster volumes, there’s a Gluster virt group in which oVirt places optimized volumes. The file that describes this group isn’t currently provided in a package, so we have to fetch it from Gluster’s source repository:
sudo curl https://raw.github.com/gluster/glusterfs/master/extras/group-virt.example -o /var/lib/glusterd/groups/virt [bz#]
Now, we’ll start the Gluster service and restart the vdsm service:
sudo service glusterd start
sudo service vdsmd restart
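If you'd like to confirm that both services came up cleanly before moving on, a quick status check does the trick:

sudo service glusterd status
sudo service vdsmd status
sudo gluster peer status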
Next, create a directory to serve as a brick for your Gluster volume, and give it the 36:36 (vdsm user and kvm group) ownership that oVirt expects:

sudo mkdir /var/lib/exports/data
sudo chown 36:36 /var/lib/exports/data [bz#]
Then, visit the “Volumes” tab, click “Create Volume,” and give your new volume a name. I’m calling mine “data.” Check the “Optimize for Virt Store” check box, and click the “Add Bricks” button.
In the resulting dialog box, populate “Brick Directory” with the path we created earlier, “/var/lib/exports/data” and click “Add” to add it to the bricks list. Then, click “OK” to exit the dialog, and “OK” again to return to the “Volumes” tab.
Before putting the volume to use, head back to the command line to set the server.allow-insecure option on it — the volume-level counterpart to the glusterd option we added earlier:

sudo gluster volume set data server.allow-insecure on
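To double-check that the option took, along with the virt-group optimizations applied by the "Optimize for Virt Store" checkbox, you can inspect the volume — and if it wasn't already started from the web console, start it from the command line as well:

sudo gluster volume info data
sudo gluster volume start data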
Then, visit the “Storage” tab, hit “New Domain,” give your domain a name, and populate the “Path” field with your machine’s hostname colon volume name:
mylittlepony.lab:data
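If you want to verify that the volume is mountable before handing it to oVirt, you can do a quick FUSE mount by hand — note that the manual mount syntax takes a slash before the volume name, unlike the oVirt "Path" field above:

sudo mount -t glusterfs mylittlepony.lab:/data /mnt
ls /mnt
sudo umount /mnt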
If you're using RHEL/CentOS and taking the Gluster via POSIXFS storage route I referenced above, you need to populate the "Path" field with your machine's hostname, a colon, a slash, and the volume name instead. Again, this is only if you're taking the POSIXFS route. With the GlusterFS storage type, that pesky slash [BZ] won't prevent the domain from being created, but it will cause VM startup to fail mysteriously. Also, in the "VFS Type" field, you'll need to enter "glusterfs".
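To continue the example hostname from above, the POSIXFS "Path" would look like the following (note the slash), with "glusterfs" entered in the "VFS Type" field:

mylittlepony.lab:/data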
Click “OK” and wait a few moments for the new storage domain to initialize. Next, click on your detached export domain, choose the “Data Center” tab in the bottom pane, click “Attach,” select “Default” data center, and click “OK.” Perform the same steps with your iso domain.
Visit the “Storage” tab, select your export domain, click “VM Import” in the lower pane, select the VM you wish to import, and click “Import.” Click “OK” on the dialog that appears next. If you didn’t remove the VM you’re importing from your local storage domain earlier, you may have to “Import as cloned.”
From here, you can experiment with different types of Gluster volumes for your data domains. For instance, if, after adding a second host to your data center, you want to replicate storage between the two hosts, you’d create a storage brick on both of your hosts, choose the replicated volume type when creating your Gluster volume, create a data domain backed by that volume, and start storing your VMs there.
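If you'd rather set that up from the command line, here's a rough sketch — the hostnames, brick paths, and volume name are hypothetical, and each brick directory needs the same 36:36 ownership as before:

sudo gluster peer probe host2.lab
sudo gluster volume create data-replica replica 2 \
    host1.lab:/var/lib/exports/data-replica \
    host2.lab:/var/lib/exports/data-replica
sudo gluster volume set data-replica group virt
sudo gluster volume set data-replica server.allow-insecure on
sudo gluster volume start data-replica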
You can also disable the NFS ISO and Export shares hosted from your AIO machine and re-create them on new Gluster volumes, accessed via Gluster’s built-in NFS server. If you do, make sure to disable your system’s own NFS service, as kernel NFS and Gluster NFS conflict with each other.
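On Fedora 19, assuming systemd, disabling kernel NFS looks like this (on RHEL/CentOS 6, the equivalent would be "service nfs stop" and "chkconfig nfs off"):

sudo systemctl stop nfs-server.service
sudo systemctl disable nfs-server.service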