<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Hi all,<br>
I hope someone can help me.<br>
<br>
Re-adding a previously removed brick to a gluster volume leaves the
gluster mount apparently empty when running ls. <br>
<br>
Steps to reproduce <br>
Create a gluster volume with two bricks <br>
<br>
On brick 1: <br>
1. mkdir -p /data/brick/gv0 <br>
2. gluster volume create gv0 replica 2 192.168.0.2:/data/brick/gv0
192.168.0.3:/data/brick/gv0 force (after brick 2 step 2) <br>
3. gluster volume start gv0 <br>
4. mkdir gluster <br>
5. mount -t glusterfs 192.168.0.2:/gv0 gluster <br>
6. Populate the newly created mount point with some files <br>
7. ls -la gluster <- note list of files <br>
8. ls -la gluster <- verify that the list of files is the same as
in the previous step (after brick 2 step 6) <br>
9. ls -la gluster <- note that all files are gone (after brick 2
step 8) <br>
10. ls -la /data/brick/gv0/ <- note that the backing store of
brick 1 is still intact and no files or gfids appear to have been
lost <br>
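For convenience, the brick 1 side can be run as a single script (a sketch assembled from the steps above; the loop in the middle is just one illustrative way to populate the mount for step 6): <br>
<br>

```shell
# Brick 1 (192.168.0.2)
mkdir -p /data/brick/gv0

# Run after brick 2 has created its brick directory and probed this peer.
gluster volume create gv0 replica 2 \
    192.168.0.2:/data/brick/gv0 192.168.0.3:/data/brick/gv0 force
gluster volume start gv0

# Mount the volume locally and populate it (illustrative file names).
mkdir gluster
mount -t glusterfs 192.168.0.2:/gv0 gluster
for i in 1 2 3; do echo "test $i" > gluster/file$i; done

ls -la gluster   # note the list of files
```
<br>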
<br>
On brick 2: <br>
1. mkdir -p /data/brick/gv0 <br>
2. gluster peer probe 192.168.0.2 <br>
3. mkdir gluster <br>
4. mount -t glusterfs 192.168.0.2:/gv0 gluster (after brick 1 step
3) <br>
5. gluster volume remove-brick gv0 replica 1
192.168.0.3:/data/brick/gv0 force (after brick 1 step 7) <br>
6. rm -rf /data/brick/gv0/ <br>
7. gluster volume add-brick gv0 replica 2
192.168.0.3:/data/brick/gv0 force <br>
8. <b>ls -la gluster <- note that all files are gone </b><br>
9. ls -la /data/brick/gv0/ <- note that the backing store of
brick 2 is empty <br>
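The brick 2 side, which contains the remove/wipe/re-add sequence that triggers the problem, can likewise be sketched as one script (same hosts and paths as in the steps above): <br>
<br>

```shell
# Brick 2 (192.168.0.3)
mkdir -p /data/brick/gv0
gluster peer probe 192.168.0.2

# Mount once the volume has been created and started on brick 1.
mkdir gluster
mount -t glusterfs 192.168.0.2:/gv0 gluster

# Drop this brick from the replica, wipe its backing store,
# then add it back -- the sequence that empties the mount.
gluster volume remove-brick gv0 replica 1 192.168.0.3:/data/brick/gv0 force
rm -rf /data/brick/gv0/
gluster volume add-brick gv0 replica 2 192.168.0.3:/data/brick/gv0 force

ls -la gluster            # all files appear to be gone
ls -la /data/brick/gv0/   # backing store on brick 2 is empty
```
<br>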
<br>
Result <br>
At this point the folder "gluster" normally appears completely empty
on both bricks. If the order of brick 1 step 9 and brick 2 step 8 is
reversed, and you wait for brick 1 step 9 to complete, the problem is
usually not seen. <br>
<br>
Additional info <br>
Ways of recovering: <br>
1. ls -la gluster/filename (for each file) <br>
    makes the files visible again, but does not seem to guarantee
that synchronization has completed.<br>
2. find gluster/filename | xargs tail -c 1 > /dev/null
2>&1 <br>
    seems to do the same as way 1, but the files now appear to be
fully synchronized on command completion.<br>
3. gluster volume heal gv0 full <br>
    performs a full synchronization of the nodes without the
drawback mentioned for ways 1 and 2, but it is asynchronous, which is
not what we want. <br>
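For completeness, recovery way 2 can be applied to the whole mount in one pass rather than per file (a sketch only; "gluster" is the mount point from the steps above): <br>
<br>

```shell
# Read the last byte of every file under the mount, which forces
# a lookup on each file and appears to trigger its self-heal (way 2).
find gluster -type f -exec tail -c 1 {} \; > /dev/null 2>&1
```
<br>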
<br>
The questions are as follows:<br>
1. Why have all the files disappeared from the directory "gluster"
after step 7 on brick 2?<br>
2. Does gluster have a synchronous command that achieves the same
effect as "<span><span style="background-color:inherit">gluster
volume heal gv0 full</span></span>"?<br>
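For question 2, one possible workaround (a sketch only; the polling loop is an assumption on our part, not a documented synchronous mode) is to start the full heal and wait until "gluster volume heal gv0 info" reports no pending entries: <br>
<br>

```shell
# Start a full heal, then block until "heal info" shows zero
# pending entries on every brick (polling workaround, not built-in).
gluster volume heal gv0 full
while gluster volume heal gv0 info | grep -q '^Number of entries: [1-9]'; do
    sleep 5
done
```
<br>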
<br>
The test was carried out on 3.6.x and several 3.7 versions, with
3.7.6 being the latest version tested.<br>
<br>
Thanks,<br>
Xin<br>
</body>
</html>