[Gluster-users] A few queries on self-healing and AFR (glusterfs 3.4.2)

A Ghoshal a.ghoshal at tcs.com
Mon Feb 2 18:30:15 UTC 2015


Hello,

I have a replica-2 volume in which I store a large number of files that 
are updated frequently (critical log files, etc.). My files are generally 
stable, but one thing that does worry me from time to time is that files 
show up on one of the bricks in the output of gluster volume heal 
<volname> info. These entries disappear on their own after a while (I am 
guessing when cluster.heal-timeout expires and another heal by the 
self-heal daemon is triggered). For certain files this is a real concern: 
while an entry is pending heal the replicas are out of sync, so my fault 
tolerance is reduced for that window.
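
For reference, the command and the kind of output I am talking about are 
roughly the below (brick paths and file names are just placeholders):

  gluster volume heal <volname> info

  Brick server1:/export/brick1
  Number of entries: 1
  /logs/app.log

  Brick server2:/export/brick1
  Number of entries: 0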

I was wondering if there is a way I could force AFR to return 
write-completion to the application only _after_ the data is written to 
both replicas successfully (something like atomic writes), even if it 
comes at the cost of performance. That way I could ensure that my bricks 
are always in sync.
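
My understanding (please correct me if I am wrong) is that the replicate 
translator already sends each write to both bricks and returns to the 
application only after both respond, and that what can mask a lagging 
replica is client-side caching such as write-behind. If that is right, 
then something along these lines might be what I am after (option names 
as I read them in the docs; I would appreciate confirmation that they 
behave this way on 3.4.2):

  # disable client-side write caching so the application only gets an
  # acknowledgement once both bricks have taken the write
  gluster volume set <volname> performance.write-behind off
  gluster volume set <volname> performance.flush-behind off

  # if this option is available on 3.4.2: have AFR fsync the data before
  # acknowledging, at an obvious performance cost
  gluster volume set <volname> cluster.ensure-durability on

Alternatively, I suppose the application could open its log files with 
O_SYNC and take the performance hit there instead.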

The other thing I could possibly do is reduce my cluster.heal-timeout (it 
is currently 600). Is it a bad idea to set it to something as small as, 
say, 60 seconds on volumes where redundancy is a prime concern?
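
Concretely, what I had in mind is the below (60 is just my guess at a 
sensible value):

  # shorten the self-heal daemon's crawl interval from the default 600s
  gluster volume set <volname> cluster.heal-timeout 60

  # and/or kick off an index heal by hand instead of waiting for it
  gluster volume heal <volname>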

One question, though: is heal through the self-heal daemon carried out 
using a separate thread for each replicated volume, or a single thread 
for all volumes? The reason I ask is that I have a large number of 
replicated volumes (17, to be precise) on each server, but I do have a 
reasonably powerful multi-core processor array and plenty of RAM, and 
top indicates that the load on system resources is quite moderate.
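
In case it helps, this is how I have been trying to check (only a rough 
indication, of course): get the self-heal daemon's PID from volume 
status and count its threads.

  # the "Self-heal Daemon" rows of the status output include the PID
  gluster volume status <volname>

  # thread count (NLWP) of that process; substitute the PID seen above
  ps -o nlwp= -p <shd-pid>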

Thanks,
Anirban

