<div dir="ltr"><br><div class="gmail_extra">
<br><div class="gmail_quote">On Thu, May 5, 2016 at 3:28 AM, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">Hi,<br>
<br>
You can find the output below link:<br>
<a href="https://www.dropbox.com/s/wzrh5yp494ogksc/status_detail.txt?dl=0" rel="noreferrer" target="_blank">https://www.dropbox.com/s/wzrh5yp494ogksc/status_detail.txt?dl=0</a><br>
<br>
Thanks,<br>
Serkan<br></blockquote><div><br></div><div>Maybe not the issue, but playing "one of these things is not like the other": glancing through the output, only one of all the bricks stands out.</div><div><br></div><div><pre style="margin-top:0px;margin-bottom:0px;padding:10px;white-space:pre-wrap;word-wrap:break-word;color:rgb(0,0,0);font-size:12px">Brick : Brick 1.1.1.235:/bricks/20
TCP Port : 49170
RDMA Port : 0
Online : Y
Pid : 26736
File System : ext4
Device : /dev/mapper/vol0-vol_root
Mount Options : rw,relatime,data=ordered
Inode Size : 256
Disk Space Free : 86.1GB
Total Disk Space : 96.0GB
Inode Count : 6406144
Free Inodes : 6381374 </pre><pre style="margin-top:0px;margin-bottom:0px;padding:10px;white-space:pre-wrap;word-wrap:break-word;color:rgb(0,0,0);font-size:12px">Every other brick is 7 TB and XFS; this one is a 96 GB ext4 filesystem on the root volume group.</pre></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
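</blockquote><div>The mismatched brick can be hard to spot by eye in a long `gluster volume status &lt;vol&gt; detail` dump. A quick sketch of one way to surface it, pairing each "Brick" line with its "File System" line; the sample text below is a trimmed stand-in for the real output, not taken from the thread verbatim:</div>

```python
# Pair every "Brick" line in 'status detail' output with the "File System"
# line that follows it, then flag bricks whose filesystem differs from the
# majority. Sample text is illustrative, trimmed to three bricks.
from collections import Counter

sample = """\
Brick                : Brick 1.1.1.235:/bricks/18
File System          : xfs
Brick                : Brick 1.1.1.235:/bricks/19
File System          : xfs
Brick                : Brick 1.1.1.235:/bricks/20
File System          : ext4
"""

def brick_filesystems(detail_text):
    """Return (brick, filesystem) pairs parsed from 'status detail' text."""
    pairs, brick = [], None
    for line in detail_text.splitlines():
        key, _, value = line.partition(":")      # split on the first colon only,
        key, value = key.strip(), value.strip()  # so host:/path stays intact
        if key == "Brick":
            brick = value
        elif key == "File System":
            pairs.append((brick, value))
    return pairs

def odd_ones_out(pairs):
    """Bricks whose filesystem differs from the most common one."""
    majority = Counter(fs for _, fs in pairs).most_common(1)[0][0]
    return [brick for brick, fs in pairs if fs != majority]

print(odd_ones_out(brick_filesystems(sample)))  # → ['Brick 1.1.1.235:/bricks/20']
```

<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">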
<br>
On Thu, May 5, 2016 at 9:33 AM, Xavier Hernandez <<a href="mailto:xhernandez@datalab.es" target="_blank">xhernandez@datalab.es</a>> wrote:<br>
> Can you post the result of 'gluster volume status v0 detail' ?<br>
><br>
><br>
> On 05/05/16 06:49, Serkan Çoban wrote:<br>
>><br>
>> Hi, can anyone suggest something for this issue? df and du show no<br>
>> problems on the bricks, yet one subvolume is not being used by gluster.<br>
>><br>
>> On Wed, May 4, 2016 at 4:40 PM, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>><br>
>> wrote:<br>
>>><br>
>>> Hi,<br>
>>><br>
>>> I changed cluster.min-free-inodes to "0" and remounted the volume on<br>
>>> the clients. The inode-full messages no longer appear in syslog, but<br>
>>> the disperse-56 subvolume is still not being used.<br>
>>> Is there anything I can do to resolve this? Maybe I can destroy and<br>
>>> recreate the volume, but I am not sure that will fix it...<br>
>>> Maybe the disperse size 16+4 is too big; should I change it to 8+2?<br>
>>><br>
>>> On Tue, May 3, 2016 at 2:36 PM, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>><br>
>>> wrote:<br>
>>>><br>
>>>> I also checked the df output; all 20 bricks are the same, like below:<br>
>>>> /dev/sdu1 7.3T 34M 7.3T 1% /bricks/20<br>
>>>><br>
>>>> On Tue, May 3, 2016 at 1:40 PM, Raghavendra G <<a href="mailto:raghavendra@gluster.com" target="_blank">raghavendra@gluster.com</a>><br>
>>>> wrote:<br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> On Mon, May 2, 2016 at 11:41 AM, Serkan Çoban <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>><br>
>>>>> wrote:<br>
>>>>>><br>
>>>>>><br>
>>>>>>> 1. What is the out put of du -hs <back-end-export>? Please get this<br>
>>>>>>> information for each of the brick that are part of disperse.<br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> Sorry, I needed the df output of the filesystem containing each<br>
>>>>> brick, not du. Sorry<br>
>>>>> about that.<br>
>>>>><br>
>>>>>><br>
>>>>>> There are 20 bricks in disperse-56 and the du -hs output is like:<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 1.8M /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>> 80K /bricks/20<br>
>>>>>><br>
>>>>>> I see that gluster is not writing to this disperse set. All other<br>
>>>>>> disperse sets are filled to 13GB, but this one is empty. The directory<br>
>>>>>> structure is created, but there are no files in the directories.<br>
>>>>>> How can I fix this? I will try a rebalance, but I don't think it<br>
>>>>>> will write to this disperse set...<br>
>>>>>><br>
>>>>>><br>
>>>>>><br>
>>>>>> On Sat, Apr 30, 2016 at 9:22 AM, Raghavendra G<br>
>>>>>> <<a href="mailto:raghavendra@gluster.com" target="_blank">raghavendra@gluster.com</a>><br>
>>>>>> wrote:<br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> On Fri, Apr 29, 2016 at 12:32 AM, Serkan Çoban<br>
>>>>>>> <<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>><br>
>>>>>>> wrote:<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> Hi, I cannot get an answer from user list, so asking to devel list.<br>
>>>>>>>><br>
>>>>>>>> I am getting [dht-diskusage.c:277:dht_is_subvol_filled] 0-v0-dht:<br>
>>>>>>>> inodes on subvolume 'v0-disperse-56' are at (100.00 %), consider<br>
>>>>>>>> adding more bricks.<br>
>>>>>>>><br>
>>>>>>>> message in the client logs. My cluster is empty; there are only a<br>
>>>>>>>> couple of GB of files for testing. Why does this message appear in<br>
>>>>>>>> syslog?<br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> dht uses disk usage information from the backend export.<br>
>>>>>>><br>
>>>>>>> 1. What is the out put of du -hs <back-end-export>? Please get this<br>
>>>>>>> information for each of the brick that are part of disperse.<br>
>>>>>>> 2. Once you get du information from each brick, the value seen by dht<br>
>>>>>>> will<br>
>>>>>>> be based on how cluster/disperse aggregates du info (basically statfs<br>
>>>>>>> fop).<br>
>>>>>>><br>
>>>>>>> The reason for 100% disk usage may be,<br>
>>>>>>> In case of 1, backend fs might be shared by data other than brick.<br>
>>>>>>> In case of 2, some issues with aggregation.<br>
>>>>>>><br>
>>>>>>>> Is it safe to<br>
>>>>>>>> ignore it?<br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> dht will try not to have data files on the subvol in question<br>
>>>>>>> (v0-disperse-56). Hence lookup cost will be two hops for files<br>
>>>>>>> hashing<br>
>>>>>>> to<br>
>>>>>>> disperse-56 (note that other fops like read/write/open still have the<br>
>>>>>>> cost<br>
>>>>>>> of a single hop and don't suffer from this penalty). Other than that<br>
>>>>>>> there<br>
>>>>>>> is<br>
>>>>>>> no significant harm unless disperse-56 is really running out of<br>
>>>>>>> space.<br>
>>>>>>><br>
>>>>>>> regards,<br>
>>>>>>> Raghavendra<br>
>>>>>>><br>
>>>>>>>> _______________________________________________<br>
>>>>>>>> Gluster-devel mailing list<br>
>>>>>>>> <a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a><br>
>>>>>>>> <a href="http://www.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> --<br>
>>>>>>> Raghavendra G<br>
>>>>>><br>
>>>>>> _______________________________________________<br>
>>>>>> Gluster-devel mailing list<br>
>>>>>> <a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a><br>
>>>>>> <a href="http://www.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> --<br>
>>>>> Raghavendra G<br>
>><br>
>> _______________________________________________<br>
>> Gluster-users mailing list<br>
>> <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
>> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
>><br>
><br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a></blockquote></div><br></div></div>
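<div>Stepping back from the quoted thread: the warning comes from dht comparing inode usage, derived from statfs-style counters, against the cluster.min-free-inodes threshold (which defaults to keeping 5% of inodes free, if I recall correctly). A rough sketch of that arithmetic with illustrative names, not the actual gluster source; plugging in the odd brick's numbers from the thread shows it is nowhere near full, which points at statfs aggregation rather than real exhaustion:</div>

```python
# Illustrative sketch (not gluster source) of a dht-style "subvolume filled"
# check: inode usage from statfs-like counters vs. a min-free-inodes threshold.

def inode_used_percent(f_files, f_ffree):
    """Percentage of inodes in use; treat bogus counters as 100% full."""
    if f_files <= 0:  # a zero or garbled f_files makes usage read as full
        return 100.0
    return 100.0 * (f_files - f_ffree) / f_files

def subvol_filled(f_files, f_ffree, min_free_percent=5.0):
    """True when less than min_free_percent of inodes remain free."""
    return inode_used_percent(f_files, f_ffree) > 100.0 - min_free_percent

# The odd brick from the thread: 6406144 inodes, 6381374 free -> ~0.4% used.
print(round(inode_used_percent(6406144, 6381374), 2))  # → 0.39
print(subvol_filled(6406144, 6381374))                 # → False
print(subvol_filled(0, 0))  # garbled statfs reads as 100% full → True
```

So a "(100.00 %)" log line against these on-disk numbers suggests the aggregated statfs result, not the brick itself, is what reads as full.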