[Image: adding multiple disks (IDE Disk 2, 3, 4) with virt-install, and the view of the external disks from VMM]
I’ve recently been creating VMs from OTHER PEOPLE’S VMs. This means that I’ve had to play around with external storage. Mounting external storage in VMs is important if you want to simulate real-world conditions (multiple disks, multiple mounts, and different file systems) in your virtualized environment.
The template we’ll use for installing a VM with extra disks is a VM with no extra disks. We can easily install a raw OS with virt-install. Something like this, for example, with the Bigtop KVM image, which you can download directly (I cp’d it into a directory I made called /VirtualMachines).
virt-install --import -n bigtop_sandbox_big1 -r 4048 --os-type=linux --disk /VirtualMachines/bigtop_hadoop-sda.raw,format=raw --vnc --noautoconsole
Before we go forward, two things to note: the first --disk option will also be the default boot disk, and I set the permissions on the /VirtualMachines/ directory wide open. Otherwise you might run into qemu permission issues when creating the VM.
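That permissions step can be sketched like so. I’m using a scratch directory under /tmp here purely for illustration; the post itself uses /VirtualMachines:

```shell
# Scratch stand-in for /VirtualMachines (illustrative path)
mkdir -p /tmp/VirtualMachines
# Wide open, so qemu can read/write the images inside
chmod 777 /tmp/VirtualMachines
stat -c '%a %n' /tmp/VirtualMachines   # → 777 /tmp/VirtualMachines
```

(A tighter alternative in real life is to chown the directory to the qemu user rather than opening it to everyone, but wide-open permissions are the quick fix described above.)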
In the above VM, there are no extra disks to write to.
Now, let’s modify the above command to attach some extra virtual disks, created on the fly, to your VM.
They will be represented as devices (in /dev/sd*) as you might expect.
virt-install --import -n bigtop_sandbox_big1 -r 4048 --os-type=linux --disk /VirtualMachines/bigtop_hadoop-sda.raw,format=raw --vnc --noautoconsole --disk path=/var/lib/libvirt/images/bigtopsandbox1,size=5 --disk path=/var/lib/libvirt/images/bigtopsandbox2,size=5 --disk path=/var/lib/libvirt/images/bigtopsandbox3,size=5
The extra --disk options above instruct virt-install to create three 5 GB virtual disks. They will then be visible in /dev/sd[a-z] on your guest OS, once it boots.
What if the disks already exist? No problem: they will just get attached to the VM without being recreated.
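If you want to pre-create an image yourself rather than letting virt-install do it, a sparse raw file works fine. This is a sketch; the directory below is a scratch stand-in for /var/lib/libvirt/images, and the filename matches the ones in the command above:

```shell
# Pre-create one of the 5 GB disk images as a sparse raw file.
# IMGDIR is illustrative; the post uses /var/lib/libvirt/images.
IMGDIR=/tmp/images
mkdir -p "$IMGDIR"
truncate -s 5G "$IMGDIR/bigtopsandbox1"
# Apparent size is 5 GB even though no blocks are allocated yet
stat -c %s "$IMGDIR/bigtopsandbox1"   # → 5368709120
```

Because the file is sparse, it costs almost no real disk space until the guest actually writes to it.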
Now, let’s verify by ls’ing /dev/sd*. Why sd*? Because that’s where disks live in Linux. One quirk worth noting, though: in virt-install, when the virtual device target is /dev/hda, it is mapped to /dev/sda in the guest. That’s just a minor detail/weird thing that threw me for a loop in the beginning when I was trying to understand the guts of all this stuff. You can scroll to the bottom of this post for details.
brw-rw----. 1 root disk 8, 0 Jul 4 20:42 /dev/sda
brw-rw----. 1 root disk 8, 1 Jul 4 20:42 /dev/sda1
brw-rw----. 1 root disk 8, 16 Jul 4 20:49 /dev/sdb
brw-rw----. 1 root disk 8, 32 Jul 4 20:42 /dev/sdc
brw-rw----. 1 root disk 8, 48 Jul 4 20:42 /dev/sdd
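A quick aside on those major/minor numbers: SCSI-style disks all share major number 8, each whole disk gets a minor that steps by 16 (sda=0, sdb=16, sdc=32, sdd=48), and partitions fill the slots in between (which is why sda1 is minor 1 above). A tiny sketch of that arithmetic:

```shell
# Minor numbers for whole sd disks step by 16;
# partition N of a disk uses minor (disk_minor + N)
i=0
for d in a b c d; do
  echo "/dev/sd$d -> major 8, minor $((i * 16))"
  i=$((i + 1))
done
```

Compare the printed minors with the ls listing above and they line up exactly.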
Now, let’s write to our virtual disks in the VM. Remember, before you can write to them, you have to format and mount them. For example, let’s mount /dev/sdb.
# Format a file system on the disk device
mkfs.ext4 /dev/sdb
# Create a mount point and mount it
mkdir -p /mnt/sdb
mount -t ext4 /dev/sdb /mnt/sdb
touch /mnt/sdb/testfile
You should add this info to fstab too, so that it’s automounted for you:
/dev/sdb /mnt/sdb ext4 defaults 0 0
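If you want to sanity-check the entry before touching the real /etc/fstab, you can stage it in a scratch file first (the path below is illustrative):

```shell
# Stage the new entry in a scratch file rather than /etc/fstab directly
FSTAB=/tmp/fstab.demo
echo '/dev/sdb /mnt/sdb ext4 defaults 0 0' > "$FSTAB"
# A well-formed fstab line has six whitespace-separated fields:
# device, mount point, fs type, options, dump, fsck pass
awk 'NF == 6 { ok = 1 } END { exit !ok }' "$FSTAB" && echo "entry looks well-formed"
```

Once it’s in the real /etc/fstab, `mount -a` will mount everything listed there, which is a quick way to catch typos before your next reboot does.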
And finally, prove to yourself once and for all that you are a KVM god by running df and confirming that your disks are all visible.
[root@localhost ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
rootfs 1968512 1171960 776556 61% /
devtmpfs 1994556 152 1994404 1% /dev
tmpfs 2003024 0 2003024 0% /dev/shm
/dev/sda1 1968512 1171960 776556 61% /
devtmpfs 1994556 152 1994404 1% /dev
/dev/sdb 5160576 141304 4757128 3% /mnt/sdb
Regarding the device names, and the ever-fascinating hda/sda conundrum.
Okay. So when I was going through all this, I found it tricky to know where the disk devices were. The initial solution (which might be somewhat helpful, but certainly isn’t ideal) is to run this command, which reveals all the disk blocks in the VM’s XML definition file created when virt-install runs.
virsh dumpxml bigtop_sandbox_big1 | grep --color -B 3 -A 5 "disk"
In my case, I get /dev/hda, /dev/hdb, and so on as the target disk device names.
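For reference, the `<disk>` elements that command digs out look roughly like the mock-up below. The XML here is illustrative (written to a scratch file so we can grep it), but the shape, the `target dev` attribute, and the image path match what the post describes:

```shell
# An illustrative <disk> fragment like the ones `virsh dumpxml` prints
cat > /tmp/disk-fragment.xml <<'EOF'
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/bigtopsandbox1'/>
  <target dev='hdb' bus='ide'/>
</disk>
EOF
# Pull out just the target device name
grep -o "dev='hd[a-z]'" /tmp/disk-fragment.xml   # → dev='hdb'
```

It’s that `dev='hdb'` target which, as we’re about to see, does not literally become /dev/hdb inside the guest.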
But strangely, when we ssh into the GUEST, and run a command to view all disk devices, we see no “hda” devices:
[root@localhost etc]# fdisk -l | grep “MB,”
Disk /dev/sda: 2147 MB, 2147483648 bytes
Disk /dev/sdc: 5368 MB, 5368709120 bytes
Disk /dev/sdd: 5368 MB, 5368709120 bytes
Disk /dev/sdb: 5368 MB, 5368709120 bytes
So I went ahead and asked, over on lovely Stack Overflow, why “hdX” is the defined device target on the hypervisor, yet “sdX” is the corresponding device in the guest. Well… I got two answers.
“The hypervisor can’t tell the guest where to mount a device.”
“Yeah, this is kinda how it works. HDA -> SDA, HDB -> SDB, etc…”
And eventually, this all spun down into someone posting a link to the RTFM resource for this problem on the libvirt site. The long and short of it is that the “target” field in “disk” declarations is, indeed, just a hint:
“The dev attribute indicates the ‘logical’ device name. The actual device name specified is not guaranteed to map to the device name in the guest OS. Treat it as a device ordering hint.”
So, rest easy, and just realize that your hdX devices map to /dev/sdX in your guest OS, and move on with your day :). In the end, this post really was just supposed to show you how to mount multiple storage devices, and I wasn’t planning on deep diving into the semantics of device names, which I’m no expert on… But sometimes, it seems like in order to understand something at a high level of abstraction you, ironically, have to first go through the pain of playing with some of the underlying complexities.