[Gluster-users] [GlusterFS 3.2] Information about hard link

Amar Tumballi amar at gluster.com
Thu Jun 16 10:53:26 UTC 2011


Hi Julien,

Answers inline.


> I've just installed a new GlusterFS architecture to replicate my storage
> tank.
> We have two servers with two disk arrays, nothing big… Once a day we back
> up our entire server park to one array.
> And I would like to use GlusterFS to replicate my backup data to the second
> disk array.
>
> I have configured Gluster for that, but I have a very important question:
> Does GlusterFS support hard links on Linux systems?
> Of course we need many hard links to map data which doesn't change between
> two backups.
>
Of course, yes.
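For the incremental-backup use case you describe, hard links behave on a GlusterFS mount just as they do on a local filesystem. A minimal sketch (the paths are illustrative, not from your setup) using GNU cp's '-al' to hard-link unchanged files between two backup generations:

```shell
# Start clean so the demo is repeatable (illustrative paths only)
rm -rf /tmp/backup.0 /tmp/backup.1
mkdir -p /tmp/backup.0
echo "unchanged data" > /tmp/backup.0/file

# 'cp -al' creates hard links instead of copying file contents,
# so unchanged files consume no extra space in the new generation
cp -al /tmp/backup.0 /tmp/backup.1

# Both directory entries now point at the same inode
stat -c %i /tmp/backup.0/file /tmp/backup.1/file
```

Tools such as rsync's --link-dest option use the same trick for rotating backups.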


> My second question is: why does the command 'gluster volume start REP_TEST'
> automatically mount the volume in /etc/glusterd/mount/REP_TEST?
>

This should not happen automatically by default. The cases where you will
see mount points on the servers are:

* while using the 'quota' feature: a one-time initial mount is needed to
crawl the directories and update the current usage details; it should be
unmounted automatically once the job completes.
* when doing a 'rebalance': a mount point exists for as long as the
rebalance is running, only on the machine that issued the 'rebalance'
command, and it should be unmounted automatically.
* when you issue 'replace-brick': to send some internal 'commands' we need
a fuse mount, so one is created and then unmounted automatically.
* while using 'geo-replication': the mount point is at a path like
'/tmp/some-uniq-string', and it is not visible in your mtab.
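If you want to check whether any of these internal mounts are currently active on a server, the FUSE mounts show up in the kernel mount table. A small sketch (not a GlusterFS command, just standard Linux tooling):

```shell
# Internal GlusterFS FUSE mounts (quota crawl, rebalance, replace-brick)
# appear in /proc/mounts with filesystem type fuse.glusterfs;
# geo-replication's temporary mount is not listed in mtab, as noted above.
grep fuse.glusterfs /proc/mounts || echo "no glusterfs mounts active"
```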

> This situation is very bad for me because I back up /etc changes all day
> long… and I don't want to back up my entire storage array with /etc…
>
> I'm starting to research the second point… but could someone give me the
> right conf file? :)
>

Change 'option working-directory /etc/glusterd/' to 'option
working-directory /something/else' in the glusterd.vol file (at
$prefix/etc/glusterfs/glusterd.vol).
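For reference, the relevant section of glusterd.vol would then look roughly like this (a sketch only; '/something/else' is a placeholder, and the rest of your file may contain additional options):

```
volume management
    type mgmt/glusterd
    option working-directory /something/else
end-volume
```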

That should fix it.

Regards,
Amar

