<div dir="ltr">I would rather stay with dd and create an image for backup.<div><br></div><div>The snapshot is already active, but cannot be mounted:</div><div><br></div><div><div>$ sudo gluster snapshot info snap1</div><div>Snapshot : snap1</div><div>Snap UUID : 2788e974-514b-4337-b41a-54b9cb5b0699</div><div>Created : 2015-09-02 14:03:59</div><div>Snap Volumes:</div><div><br></div><div>Snap Volume Name : 2d828e6282964e0e89616b297130aa1b</div><div>Origin Volume name : vol1</div><div>Snaps taken for vol1 : 2</div><div>Snaps available for vol1 : 254</div><div>Status : Started</div></div><div><br></div><div>
<p>$ sudo mount gs1:/snaps/snap1/vol1 /mnt/external/</p><p>mount.nfs: mounting gs1:/snaps/snap1/vol1 failed, reason given by server: No such file or directory</p><p>$ sudo mount gs1:/snaps/2d828e6282964e0e89616b297130aa1b/vol1 /mnt/external/<br></p><p>mount.nfs: mounting gs1:/snaps/2d828e6282964e0e89616b297130aa1b/vol1 failed, reason given by server: No such file or directory</p><p>Also, when taking a backup with dd, I need to know the name of the mount source.</p><p>Gluster automounts the snapshot bricks with the following mount source:</p><p>
</p><p class="">
</p><p class=""><span class="">/dev/mapper/gluster-2d828e6282964e0e89616b297130aa1b_0</span></p><p class="">If I knew that name, I could run dd:</p><p class="">$ sudo dd if=/dev/mapper/gluster-d0c254908dca451d8f566be77437c538_0 | gzip > snap1.gz</p><p class="">41738240+0 records in<br>41738240+0 records out<br>21369978880 bytes (21 GB) copied, 401.596 s, 53.2 MB/s</p><p class="">How can I find out the name of the mount source in advance?</p><p class="">I want to run this by cron. E.g.</p><p class="">
</p><p class="MsoNormal" style="margin-bottom:12pt;line-height:14.65pt;background:white"><span style="font-size:11pt;font-family:Menlo;color:black">gluster snapshot create snap1 vol1 no-timestamp <br></span><span style="line-height:normal">dd if=/dev/mapper/gluster-snap1 | gzip > snap1.gz<br>ftp ...</span></p><p class="MsoNormal" style="margin-bottom:12pt;line-height:14.65pt;background:white"><span style="line-height:normal">Thank you in advance for shedding some light on doing backups from GlusterFS.</span></p></div></div><div class="gmail_extra"><br><div class="gmail_quote">2015-09-03 13:21 GMT+02:00 Rajesh Joseph <span dir="ltr"><<a href="mailto:rjoseph@redhat.com" target="_blank">rjoseph@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
<br>
----- Original Message -----<br>
> From: "Merlin Morgenstern" <<a href="mailto:merlin.morgenstern@gmail.com">merlin.morgenstern@gmail.com</a>><br>
> To: "Rajesh Joseph" <<a href="mailto:rjoseph@redhat.com">rjoseph@redhat.com</a>><br>
> Cc: "gluster-users" <<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>><br>
</span><span class="">> Sent: Wednesday, September 2, 2015 8:27:40 PM<br>
> Subject: Re: [Gluster-users] gluster volume snap shot - basic questions<br>
><br>
</span><span class="">> Just double checked for the location of the snapshot files.<br>
><br>
> Documentations says they should be here:<br>
><br>
> A directory named snap will be created under the vol directory<br>
> (..../glusterd/vols/<volname>/snap). Under which each created snap<br>
> will be a self contained directory with meta files, and snap volumes<br>
><br>
> <a href="http://www.gluster.org/community/documentation/index.php/Features/snapshot" rel="noreferrer" target="_blank">http://www.gluster.org/community/documentation/index.php/Features/snapshot</a><br>
><br>
<br>
</span>The above link is a little outdated. Check out the following links:<br>
<a href="http://www.gluster.org/community/documentation/index.php/Features/Gluster_Volume_Snapshot" rel="noreferrer" target="_blank">http://www.gluster.org/community/documentation/index.php/Features/Gluster_Volume_Snapshot</a><br>
<a href="http://rajesh-joseph.blogspot.in/p/gluster-volume-snapshot-howto.html" rel="noreferrer" target="_blank">http://rajesh-joseph.blogspot.in/p/gluster-volume-snapshot-howto.html</a><br>
<span class=""><br>
> Unfortunately they are not; they are in /var/lib/glusterd/snaps/<br>
><br>
> Each snap has a directory with volumes inside.<br>
><br>
<br>
</span>A snapshot of a Gluster volume creates a point-in-time copy of the entire volume. That is why<br>
the snapshot of a Gluster volume is itself a kind of Gluster volume. Like any regular Gluster volume,<br>
a snapshot also has data and volume config files associated with it.<br>
<br>
The config files for a snapshot are stored in the /var/lib/glusterd/snaps/<snapname> directory.<br>
The actual data (the bricks) is stored on individual LVM snapshot volumes, which are mounted at /var/run/gluster/snaps/<snap-volname>/brick<no>/<br>
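If you do want to script against the brick devices, the device-mapper name can be derived from the snap volume name. A sketch (the gluster-<snap-volname>_0 naming is an assumption based on the devices observed earlier in this thread; verify against your own /dev/mapper before relying on it):

```shell
# Extract the "Snap Volume Name" field from `gluster snapshot info` output
# read on stdin, then build the assumed device-mapper path from it.
snap_volname() {
  sed -n 's/^Snap Volume Name[[:space:]]*:[[:space:]]*//p'
}

# In a real script: SNAPVOL=$(sudo gluster snapshot info snap1 | snap_volname)
# Demonstrated here on the line quoted earlier in this thread:
SNAPVOL=$(printf 'Snap Volume Name : 2d828e6282964e0e89616b297130aa1b\n' | snap_volname)
echo "/dev/mapper/gluster-${SNAPVOL}_0"
```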
<span class=""><br>
<br>
> If I want to use the dd command, which volume should I backup?<br>
><br>
<br>
</span>I think this would be a very primitive way of taking backups and might take a lot of time.<br>
Consider using an open-source backup solution, e.g. Bareos.<br>
<br>
If you want to use dd then I suggest mounting the snapshot volume and running dd against the mount point. Otherwise<br>
you need to back up all the bricks separately and handle replicas as well.<br>
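For example, a sketch of a whole-volume backup via a snapshot mount (host gs1, volume vol1 and the snapshot name come from this thread; the mount point and archive path are invented for illustration, and it should run as root):

```shell
# Back up the whole snapshot volume through a glusterfs mount
# instead of dd-ing individual brick devices.
backup_snapshot() {
  snap=$1; vol=$2; mnt=$3; out=$4
  mkdir -p "$mnt"
  mount -t glusterfs "gs1:/snaps/$snap/$vol" "$mnt"
  tar czf "$out" -C "$mnt" .     # archive the mounted snapshot contents
  umount "$mnt"
}

# e.g. (as root): backup_snapshot snap1 vol1 /mnt/snapbackup /backup/snap1.tar.gz
```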
<span class=""><br>
> ls:<br>
><br>
> node1:/data/mysql/data$ ll<br>
> /var/lib/glusterd/snaps/snap1/2d828e6282964e0e89616b297130aa1b/<br>
><br>
> total 56<br>
><br>
</span>> drwxr-xr-x 3 root root 4096 Sep 2 16:04 *.*/<br>
><br>
> drwxr-xr-x 4 root root 4096 Sep 2 16:04 *..*/<br>
<span class="">><br>
> -rw------- 1 root root 4559 Sep 2 16:03<br>
> 2d828e6282964e0e89616b297130aa1b.gs1.run-gluster-snaps-2d828e6282964e0e89616b297130aa1b-brick1-brick1.vol<br>
><br>
> -rw------- 1 root root 4559 Sep 2 16:03<br>
> 2d828e6282964e0e89616b297130aa1b.gs2.run-gluster-snaps-2d828e6282964e0e89616b297130aa1b-brick2-brick1.vol<br>
><br>
> -rw------- 1 root root 2250 Sep 2 16:03<br>
> 2d828e6282964e0e89616b297130aa1b-rebalance.vol<br>
><br>
> -rw------- 1 root root 2250 Sep 2 16:03<br>
> 2d828e6282964e0e89616b297130aa1b.tcp-fuse.vol<br>
><br>
</span>> drwxr-xr-x 2 root root 4096 Sep 2 16:04 *bricks*/<br>
<div><div class="h5">><br>
> -rw------- 1 root root 16 Sep 2 16:04 cksum<br>
><br>
> -rw------- 1 root root 587 Sep 2 16:04 info<br>
><br>
> -rw------- 1 root root 93 Sep 2 16:04 <a href="http://node_state.info" rel="noreferrer" target="_blank">node_state.info</a><br>
><br>
> -rw------- 1 root root 0 Sep 2 16:03 quota.conf<br>
><br>
> -rw------- 1 root root 13 Sep 2 16:04 <a href="http://snapd.info" rel="noreferrer" target="_blank">snapd.info</a><br>
><br>
> -rw------- 1 root root 2478 Sep 2 16:03<br>
> trusted-2d828e6282964e0e89616b297130aa1b.tcp-fuse.vol<br>
><br>
><br>
><br>
> 2015-09-02 16:31 GMT+02:00 Merlin Morgenstern <<a href="mailto:merlin.morgenstern@gmail.com">merlin.morgenstern@gmail.com</a>><br>
> :<br>
><br>
> > So what would be the fastest possible way to make a backup to one single<br>
> > file of the entire file system? Would this probably be dd?<br>
> ><br>
> > e.g.:<br>
> > sudo umount /run/gluster/snaps/7cb4b2c8f8a64ceaba62bc4ca6cd76b2/brick1<br>
> ><br>
> > sudo dd if=/dev/mapper/gluster-506cb09085b2428e9daca8ac0857c2c9_0 | gzip<br>
> > > snap01.gz<br>
> ><br>
> > That seems to work, but how could I possibly know the snapshot name? I<br>
> > took this info here from df -h since the snapshot can not be found under<br>
> > /snaps/snapshot_name<br>
> ><br>
> > I also tried to run the command you mentioned:<br>
> ><br>
> > > to mount snapshot volume:<br>
> > > mount -t glusterfs <hostname>:/snaps/<snap-name>/<origin-volname><br>
> > /<mount_point><br>
> ><br>
> > This did not work. There seems not to be any folder called /snaps/ as when<br>
> > I press tab I get suggestion for vol1 but nothing else.<br>
> ><br>
> > Here is the mount log:<br>
> ><br>
> > E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk]<br>
> > 0-vol1-client-0: failed to get the port number for remote subvolume. Please<br>
> > run 'gluster volume status' on server to see if brick process is running.<br>
<br>
</div></div>By default, snapshots are in a deactivated state. You must activate them before mounting.<br>
Use the following command to do so:<br>
<br>
gluster snapshot activate <snapname><br>
<br>
Or use the following config command to activate snapshots by default:<br>
gluster snapshot config activate-on-create enable<br>
<br>
After running the above command, all newly created snapshots will be activated by default.<br>
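Putting the pieces together, the cron job sketched earlier could look roughly like this. This is only a sketch: the snapshot name, mount point, backup path and localhost mount source are invented for illustration, error handling is minimal, and /tmp is used here just to keep the example self-contained (a real script would live somewhere like /usr/local/sbin):

```shell
# Write a minimal snapshot-backup script: create -> activate -> mount -> tar.
cat > /tmp/gluster-snap-backup.sh <<'EOF'
#!/bin/sh
set -e
VOL=vol1
SNAP="backup-$(date +%Y%m%d)"
MNT=/mnt/gluster-snap

gluster snapshot create "$SNAP" "$VOL" no-timestamp
gluster snapshot activate "$SNAP"   # not needed if activate-on-create is enabled
mkdir -p "$MNT"
mount -t glusterfs "localhost:/snaps/$SNAP/$VOL" "$MNT"
tar czf "/backup/$SNAP.tar.gz" -C "$MNT" .
umount "$MNT"
# then push /backup/$SNAP.tar.gz to the FTP server and prune old snapshots
EOF
chmod +x /tmp/gluster-snap-backup.sh
```

A crontab entry such as `0 3 * * * /usr/local/sbin/gluster-snap-backup.sh` (hypothetical path) would then run it nightly.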
<div class="HOEnZb"><div class="h5"><br>
> ><br>
> > Thank you in advance for any help<br>
> ><br>
> ><br>
> ><br>
> > 2015-09-02 14:11 GMT+02:00 Rajesh Joseph <<a href="mailto:rjoseph@redhat.com">rjoseph@redhat.com</a>>:<br>
> ><br>
> >><br>
> >><br>
> >> ----- Original Message -----<br>
> >> > From: "Merlin Morgenstern" <<a href="mailto:merlin.morgenstern@gmail.com">merlin.morgenstern@gmail.com</a>><br>
> >> > To: "Rajesh Joseph" <<a href="mailto:rjoseph@redhat.com">rjoseph@redhat.com</a>><br>
> >> > Cc: "gluster-users" <<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>><br>
> >> > Sent: Wednesday, September 2, 2015 11:53:05 AM<br>
> >> > Subject: Re: [Gluster-users] gluster volume snap shot - basic questions<br>
> >> ><br>
> >> > Thank you Rjesh for your help. I have a thinly provisioned LVM now<br>
> >> running<br>
> >> > and can create snapshots on a real device, surviving boot.<br>
> >> ><br>
> >> > There are 2 other questions rising up now.<br>
> >> ><br>
> >> > 1. I have a LV with 20G, data is 7G. How is it possible, that I could<br>
> >> make<br>
> >> > 3 snapshots, each 7G?<br>
> >> ><br>
> >> > /dev/mapper/gluster-thinv1 20G 7.0G 12G<br>
> >> > 38% /bricks/brick1<br>
> >> ><br>
> >> > /dev/mapper/gluster-7cb4b2c8f8a64ceaba62bc4ca6cd76b2_0 20G 7.0G 12G<br>
> >> > 38% /run/gluster/snaps/7cb4b2c8f8a64ceaba62bc4ca6cd76b2/brick1<br>
> >> ><br>
> >> > /dev/mapper/gluster-506cb09085b2428e9daca8ac0857c2c9_0 20G 7.0G 12G<br>
> >> > 38% /run/gluster/snaps/506cb09085b2428e9daca8ac0857c2c9/brick1<br>
> >> ><br>
> >> > /dev/mapper/gluster-fbee900c1cc7407f9527f98206e6566d_0 20G 7.0G 12G<br>
> >> > 38% /run/gluster/snaps/fbee900c1cc7407f9527f98206e6566d/brick1<br>
> >> ><br>
> >> > /dev/mapper/gluster-d0c254908dca451d8f566be77437c538_0 20G 7.0G 12G<br>
> >> > 38% /run/gluster/snaps/d0c254908dca451d8f566be77437c538/brick1<br>
> >> ><br>
> >> ><br>
> >><br>
> >> These snapshots are copy-on-write (COW) therefore they hardly consume any<br>
> >> space.<br>
> >> As your main volume change the space consumption of the snapshots also<br>
> >> grow.<br>
> >> Check the "lvs" command to see the actual snapshot space consumption.<br>
> >><br>
> >> You can get more detailed information if you search for thinly<br>
> >> provisioned LVM and snapshots.<br>
> >><br>
> >><br>
> >> > 2. The name of the snapshot folder is the UUID, My plan is to do a "tar<br>
> >> cf"<br>
> >> > on the snapshot and even incremental tars. Therefore I would need the<br>
> >> name<br>
> >> > of the folder. How could I pass that name to my bash script in order to<br>
> >> > make a backup of the last snap?<br>
> >> ><br>
> >><br>
> >> Instead of taking per brick backup you can think of taking backup of the<br>
> >> entire snapshot<br>
> >> volume. You can mount the snapshot volume and perform the backup. Use the<br>
> >> following command<br>
> >> to mount snapshot volume:<br>
> >> mount -t glusterfs <hostname>:/snaps/<snap-name>/<origin-volname><br>
> >> /<mount_point><br>
> >><br>
> >> or else if you want to find the name of the snapshot volume (UUID) then<br>
> >> run the<br>
> >> following command<br>
> >> gluster snapshot info<br>
> >><br>
> >> ><br>
> >> > 3. A tar process will take hours on the million files I have. I<br>
> >> understand<br>
> >> > this is a snapshot, is there a way to backup a "single" snapshot file<br>
> >> > instead?<br>
> >><br>
> >> Snapshot is maintained in the underlying file-system and I see no way of<br>
> >> transferring<br>
> >> it to another system.<br>
> >><br>
> >> ><br>
> >> > Thank you in advance for shedding some light on this topic<br>
> >> ><br>
> >> > 2015-09-02 7:59 GMT+02:00 Rajesh Joseph <<a href="mailto:rjoseph@redhat.com">rjoseph@redhat.com</a>>:<br>
> >> ><br>
> >> > ><br>
> >> > ><br>
> >> > > ----- Original Message -----<br>
> >> > > > From: "Merlin Morgenstern" <<a href="mailto:merlin.morgenstern@gmail.com">merlin.morgenstern@gmail.com</a>><br>
> >> > > > To: "gluster-users" <<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>><br>
> >> > > > Sent: Tuesday, September 1, 2015 3:15:43 PM<br>
> >> > > > Subject: [Gluster-users] gluster volume snap shot - basic questions<br>
> >> > > ><br>
> >> > > > Hi everybody,<br>
> >> > > ><br>
> >> > > > I am looking into the snap shot tool, following this tutorial:<br>
> >> > > > <a href="http://blog.gluster.org/2014/10/gluster-volume-snapshot-howto/" rel="noreferrer" target="_blank">http://blog.gluster.org/2014/10/gluster-volume-snapshot-howto/</a><br>
> >> > > ><br>
> >> > > > While having successfully created the LVM, gluster volume and one<br>
> >> > > snapshot,<br>
> >> > > > there are some questions arrising where I was hoping to find some<br>
> >> > > guidence<br>
> >> > > > here:<br>
> >> > > ><br>
> >> > > > 1. From a working setup as in the example I rebooted and everything<br>
> >> was<br>
> >> > > gone.<br>
> >> > > > How can I make this setup persistent, so the gluster share is up and<br>
> >> > > running<br>
> >> > > > after boot.<br>
> >> > > ><br>
> >> > ><br>
> >> > > What do you mean by "everything was gone"? Are you using loop back<br>
> >> device<br>
> >> > > as disks?<br>
> >> > > If yes then this is expected. Loop back device mapping is gone after<br>
> >> > > machine restart.<br>
> >> > > You should test with real disk or lvm partition.<br>
> >> > ><br>
> >> > > > 2. I understand that the snaps are under /var/run/gluster/snaps/<br>
> >> and I<br>
> >> > > found<br>
> >> > > > them there. Is it save to simply copy them to another server for<br>
> >> backup?<br>
> >> > > My<br>
> >> > > > goal is to create a backup each day and transfer the snaps to an<br>
> >> > > FTP-Server<br>
> >> > > > in order to be able to recover from a broken machine.<br>
> >> > > ><br>
> >> > ><br>
> >> > > Yes, snap of individual bricks are mounted at<br>
> >> /var/run/gluster/snaps/. I<br>
> >> > > am assuming<br>
> >> > > that you mean copy of data hosted on the snap brick when you say copy<br>
> >> the<br>
> >> > > snap.<br>
> >> > > Are you planning to use some backup software or to run rsync on each<br>
> >> brick?<br>
> >> > ><br>
> >> > > > 3. Do I really need LVM to use this feature? Currently my setup<br>
> >> works on<br>
> >> > > the<br>
> >> > > > native system. As I understand the tuturial I would need to move<br>
> >> that to<br>
> >> > > a<br>
> >> > > > LV, right?<br>
> >> > > ><br>
> >> > ><br>
> >> > > Yes, you need LVM and to be precise thinly provisioned LVM for<br>
> >> snapshot to<br>
> >> > > work.<br>
> >> > ><br>
> >> > > > Thank you in advance on any help!<br>
> >> > > ><br>
> >> > > > _______________________________________________<br>
> >> > > > Gluster-users mailing list<br>
> >> > > > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> >> > > > <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
> >> > ><br>
> >> ><br>
> >><br>
> ><br>
> ><br>
><br>
</div></div></blockquote></div><br></div>