<div dir="ltr"><div>Hello Vijay, Atin and Avra,<br></div><div>Thanks a lot for your advice.</div><div><br></div><div>Because users are currently on the system, I cannot stop our servers right now, but I'm planning to restart the suspicious host gluster13.</div><div><br></div><div><div>I use glusterd version 3.6.1 on all servers and 3.6.0.29 on clients.</div><div>The OS is CentOS 6.6.</div><div>In /var/lib/glusterd/<a href="http://glusterd.info">glusterd.info</a>, “operating-version=1” is found on all server hosts.</div></div><div><br></div><div><div>I wasn't sure how to check .cmd_log_history,</div><div>but I searched for “volume status” in the log as follows:</div><div>$ grep ‘volume status’ .cmd_log_history</div><div>I found “volume status : SUCCESS” on almost all nodes except gluster13. In .cmd_log_history on gluster13, I found “volume status testvol : FAILED : Locking failed on gluster13. Please check log file for details.”</div><div><br></div><div>Best regards,</div><div>Kondo</div><div> </div></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">2015-04-21 18:27 GMT+09:00 Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
<br>
On 04/21/2015 02:47 PM, Avra Sengupta wrote:<br>
> In the logs I see glusterd_lock() being used. This API is called only<br>
> in older versions of gluster, or if the cluster operating version is<br>
> less than 30600. So along with the version of glusterfs used, could you<br>
> also let us know the cluster operating version. You can check it as<br>
> "operating-version" in the /var/lib/glusterd/<a href="http://glusterd.info" target="_blank">glusterd.info</a> file.<br>
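That check can be sketched as follows. The sample file contents are fabricated for illustration (modelled on values reported in this thread, not read from a live node):

```shell
# Fabricated copy of /var/lib/glusterd/glusterd.info -- the UUID and
# version are illustrative, not taken from a real cluster
cat > /tmp/glusterd.info.sample <<'EOF'
UUID=03a32bce-ec63-4dc3-a287-4901a55dd8c9
operating-version=1
EOF

# Extract the cluster operating version; anything below 30600 means the
# older glusterd_lock() code path is in use
grep '^operating-version' /tmp/glusterd.info.sample | cut -d= -f2   # prints: 1
```

An operating version of 1, as reported in this thread, is well below 30600, which is why the older locking path is active.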
</span>Additionally, please check whether concurrent volume operations were<br>
triggered by looking at .cmd_log_history across all the nodes; if so, they<br>
could result in stale locks.<br>
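A minimal sketch of that check, run here against a fabricated .cmd_log_history excerpt (the entries are assumptions modelled on the messages quoted in this thread):

```shell
# Fabricated excerpt of a node's /var/lib/glusterd/.cmd_log_history;
# real entries look similar, but this sample is not from a live cluster
cat > /tmp/cmd_log_history.sample <<'EOF'
[2015-04-08 08:40:01.000000]  : volume status testvol : SUCCESS
[2015-04-08 08:43:02.460654]  : volume status testvol : FAILED : Locking failed on gluster13. Please check log file for details.
EOF

# Count lock failures on this node; repeating this on every node shows
# which host is failing to take (or still holding) the cluster lock
grep -c 'FAILED : Locking failed' /tmp/cmd_log_history.sample   # prints: 1
```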
<br>
~Atin<br>
<div><div class="h5">><br>
> Regards,<br>
> Avra<br>
><br>
> On 04/21/2015 02:34 PM, Avra Sengupta wrote:<br>
>> Hi Kondo,<br>
>><br>
>> Can you also mention the version of gluster you are using?<br>
>><br>
>> +Adding gluster-users<br>
>><br>
>> Regards,<br>
>> Avra<br>
>> On 04/21/2015 02:27 PM, Avra Sengupta wrote:<br>
>>> Hi Kondo,<br>
>>><br>
>>> I went through the gluster13 logs you had sent. It seems something<br>
>>> on that machine is holding the lock and not releasing it. There are<br>
>>> a few ways the system might end up in this scenario. I will try to<br>
>>> explain with an example.<br>
>>><br>
>>> Let's say I have gluster11, gluster12, and gluster13 in my cluster.<br>
>>> I initiate a command from gluster11. The first thing that command<br>
>>> does is hold a lock on all the nodes in the cluster on behalf of<br>
>>> gluster11. Once the command does what's intended, its last act<br>
>>> before ending is to unlock all the nodes in the cluster. Only<br>
>>> the node that issued the lock can issue the unlock.<br>
>>><br>
>>> In your case, some command successfully acquired the lock on<br>
>>> gluster13. The node which initiated the command (or glusterd on that<br>
>>> node) then went down before it could complete the command, so it<br>
>>> never got to send the unlock to gluster13.<br>
>>><br>
>>> There's a workaround: you can restart glusterd on gluster13 and<br>
>>> it should work fine.<br>
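The workaround can be sketched as a small helper. The ssh invocation and SysV service name are assumptions (CentOS 6 init scripts) and are left commented out here:

```shell
# Sketch of the stale-lock workaround: restart glusterd on the offending
# node, then retry the failed command. The ssh call and "service" name are
# assumptions for a CentOS 6 cluster, so the call is commented out in
# this illustration.
clear_stale_lock() {
  host="$1"
  echo "restarting glusterd on $host"
  # ssh "$host" 'service glusterd restart'   # uncomment on a real cluster
}

clear_stale_lock gluster13   # prints: restarting glusterd on gluster13
```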
>>><br>
>>> Regards,<br>
>>> Avra<br>
>>><br>
>>> On 04/20/2015 06:55 PM, kenji kondo wrote:<br>
>>>> Hello Vijay,<br>
>>>> Maybe this is a very rare case, but is there any idea?<br>
>>>><br>
>>>> Thanks,<br>
>>>> Kondo<br>
>>>><br>
>>>> 2015-04-15 9:47 GMT+09:00 Vijaikumar M <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a><br>
</div></div>>>>> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>>>:<br>
<span class="">>>>><br>
>>>> Adding Avra...<br>
>>>><br>
>>>> Thanks,<br>
>>>> Vijay<br>
>>>><br>
>>>><br>
>>>> -------- Forwarded Message --------<br>
>>>> Subject: Re: [Gluster-users] Quota trouble<br>
>>>> Date: Wed, 15 Apr 2015 00:27:26 +0900<br>
>>>> From: kenji kondo <<a href="mailto:kkay.jp@gmail.com">kkay.jp@gmail.com</a>><br>
</span><span class="">>>>> <mailto:<a href="mailto:kkay.jp@gmail.com">kkay.jp@gmail.com</a>><br>
>>>> To: Vijaikumar M <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>><br>
</span><div><div class="h5">>>>> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>><br>
>>>><br>
>>>><br>
>>>><br>
>>>> Hi Vijay,<br>
>>>><br>
>>>> Thanks for your comments.<br>
>>>><br>
>>>><br>
>>>> The lock error occurs on one server, called "gluster13".<br>
>>>><br>
>>>> On gluster13, I tried to create a new volume and enable quota,<br>
>>>> but it failed as below.<br>
>>>><br>
>>>><br>
>>>> On both hosts gluster10 and gluster13, I ran:<br>
>>>><br>
>>>> $ sudo mkdir /export11/testbrick1<br>
>>>><br>
>>>> $ sudo mkdir /export11/testbrick2<br>
>>>><br>
>>>> On gluster13, I ran:<br>
>>>><br>
>>>> $ sudo /usr/sbin/gluster volume create testvol2<br>
>>>> gluster13:/export11/testbrick1 gluster13:/export11/testbrick2<br>
>>>><br>
>>>> volume create: testvol2: failed: Locking failed on gluster13.<br>
>>>> Please check log file for details.<br>
>>>><br>
>>>> $ sudo /usr/sbin/gluster volume create testvol2<br>
>>>> gluster10:/export11/testbrick1 gluster10:/export11/testbrick2<br>
>>>><br>
>>>> volume create: testvol2: failed: Locking failed on gluster13.<br>
>>>> Please check log file for details.<br>
>>>><br>
>>>> But I received the error messages above.<br>
>>>><br>
>>>> On the other hand, when run from gluster10, it succeeded.<br>
>>>><br>
>>>> Again, on gluster13, I tried to enable quota, but it failed as below.<br>
>>>><br>
>>>> $ sudo /usr/sbin/gluster volume quota testvol2 enable<br>
>>>><br>
>>>> quota command failed : Locking failed on gluster13. Please check<br>
>>>> log file for details.<br>
>>>><br>
>>>><br>
>>>> Please find the attached.<br>
>>>><br>
>>>> Error messages can be found in the gluster13 log.<br>
>>>><br>
>>>><br>
>>>> Best regards,<br>
>>>><br>
>>>> Kondo<br>
>>>><br>
>>>><br>
>>>><br>
>>>> 2015-04-13 19:38 GMT+09:00 Vijaikumar M <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a><br>
</div></div>>>>> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>>>:<br>
<div><div class="h5">>>>><br>
>>>> Hi Kondo,<br>
>>>><br>
>>>> The lock error you mentioned occurs because another operation<br>
>>>> is still running on the volume, so it is not able to acquire<br>
>>>> the lock.<br>
>>>> There is a bug where the proper error message is not displayed;<br>
>>>> we are working on fixing this issue.<br>
>>>><br>
>>>> I was not able to find any clue on why quotad is not running.<br>
>>>><br>
>>>> I wanted to check if we can manually start quotad, something<br>
>>>> like below:<br>
>>>><br>
>>>> # /usr/local/sbin/glusterfs -s localhost --volfile-id<br>
>>>> gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l<br>
>>>> /var/log/glusterfs/quotad.log -S<br>
>>>> /var/run/gluster/myquotad.socket --xlator-option<br>
>>>> *replicate*.data-self-heal=off --xlator-option<br>
>>>> *replicate*.metadata-self-heal=off --xlator-option<br>
>>>> *replicate*.entry-self-heal=off<br>
>>>><br>
>>>> or<br>
>>>><br>
>>>> create a new temporary volume, and enable quota on this<br>
>>>> volume. (quotad is shared by all volumes which have<br>
>>>> quota enabled)<br>
>>>><br>
>>>><br>
>>>> Thanks,<br>
>>>> Vijay<br>
>>>><br>
>>>><br>
>>>> On Sunday 12 April 2015 07:05 PM, kenji kondo wrote:<br>
>>>>> Hi Vijay,<br>
>>>>><br>
>>>>> Thank you for your suggestion. But I'm sorry, it's<br>
>>>>> difficult to access it from outside because my glusterfs<br>
>>>>> system is closed off.<br>
>>>>> I will give up if there is no clue in the attached<br>
>>>>> log.<br>
>>>>><br>
>>>>> Best regards,<br>
>>>>> Kondo<br>
>>>>><br>
>>>>><br>
>>>>> 2015-04-09 15:40 GMT+09:00 Vijaikumar M<br>
</div></div>>>>>> <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>>>:<br>
<div><div class="h5">>>>>><br>
>>>>><br>
>>>>><br>
>>>>> On Thursday 09 April 2015 11:58 AM, Vijaikumar M wrote:<br>
>>>>>><br>
>>>>>><br>
>>>>>> On Wednesday 08 April 2015 09:57 PM, kenji kondo wrote:<br>
>>>>>>> Hi Vijay,<br>
>>>>>>><br>
>>>>>>> I checked the all of the setting.<br>
>>>>>>> The all are 'features.quota=on' when I set quota<br>
>>>>>>> enable and the all are 'features.quota=off' when I<br>
>>>>>>> set quota disable.<br>
>>>>>>><br>
>>>>>>> But I found a new issue.<br>
>>>>>>> When I checked the volume status on all servers, on one<br>
>>>>>>> of the servers I received the error message below.<br>
>>>>>>><br>
>>>>>>> $ sudo /usr/sbin/gluster volume status testvol<br>
>>>>>>> Locking failed on gluster13. Please check log file<br>
>>>>>>> for details.<br>
>>>>>>><br>
>>>>>>> In etc-glusterfs-glusterd.vol.log on the problem server,<br>
>>>>>>> I found error messages as below.<br>
>>>>>>> [2015-04-08 08:40:04.782644] I<br>
>>>>>>> [mem-pool.c:545:mem_pool_destroy] 0-management:<br>
>>>>>>> size=588 max=0 total=0<br>
>>>>>>> [2015-04-08 08:40:04.782685] I<br>
>>>>>>> [mem-pool.c:545:mem_pool_destroy] 0-management:<br>
>>>>>>> size=124 max=0 total=0<br>
>>>>>>> [2015-04-08 08:40:04.782848] W<br>
>>>>>>> [socket.c:611:__socket_rwv] 0-management: readv on<br>
>>>>>>> /var/run/14b05cd492843e6e288e290c2d63093c.socket<br>
>>>>>>> failed (Invalid arguments)<br>
>>>>>>> [2015-04-08 08:40:04.805407] I [MSGID: 106006]<br>
>>>>>>> [glusterd-handler.c:4257:__glusterd_nodesvc_rpc_notify]<br>
>>>>>>> 0-management: nfs has disconnected from glusterd.<br>
>>>>>>> [2015-04-08 08:43:02.439001] I<br>
>>>>>>><br>
>>>>>>> [glusterd-handler.c:3803:__glusterd_handle_status_volume]<br>
>>>>>>> 0-management: Received status volume req for volume<br>
>>>>>>> testvol<br>
>>>>>>> [2015-04-08 08:43:02.460581] E<br>
>>>>>>> [glusterd-utils.c:148:glusterd_lock] 0-management:<br>
>>>>>>> Unable to get lock for uuid:<br>
>>>>>>> 03a32bce-ec63-4dc3-a287-4901a55dd8c9, lock held by:<br>
>>>>>>> 03a32bce-ec63-4dc3-a287-4901a55dd8c9<br>
>>>>>>> [2015-04-08 08:43:02.460632] E<br>
>>>>>>> [glusterd-op-sm.c:6584:glusterd_op_sm] 0-management:<br>
>>>>>>> handler returned: -1<br>
>>>>>>> [2015-04-08 08:43:02.460654] E<br>
>>>>>>> [glusterd-syncop.c:105:gd_collate_errors] 0-: Locking<br>
>>>>>>> failed on gluster13. Please check log file for details.<br>
>>>>>>> [2015-04-08 08:43:02.461409] E<br>
>>>>>>> [glusterd-syncop.c:1602:gd_sync_task_begin]<br>
>>>>>>> 0-management: Locking Peers Failed.<br>
>>>>>>> [2015-04-08 08:43:43.698168] I<br>
>>>>>>><br>
>>>>>>> [glusterd-handler.c:3803:__glusterd_handle_status_volume]<br>
>>>>>>> 0-management: Received status volume req for volume<br>
>>>>>>> testvol<br>
>>>>>>> [2015-04-08 08:43:43.698813] E<br>
>>>>>>> [glusterd-utils.c:148:glusterd_lock] 0-management:<br>
>>>>>>> Unable to get lock for uuid:<br>
>>>>>>> 03a32bce-ec63-4dc3-a287-4901a55dd8c9, lock held by:<br>
>>>>>>> 03a32bce-ec63-4dc3-a287-4901a55dd8c9<br>
>>>>>>> [2015-04-08 08:43:43.698898] E<br>
>>>>>>> [glusterd-op-sm.c:6584:glusterd_op_sm] 0-management:<br>
>>>>>>> handler returned: -1<br>
>>>>>>> [2015-04-08 08:43:43.698994] E<br>
>>>>>>> [glusterd-syncop.c:105:gd_collate_errors] 0-: Locking<br>
>>>>>>> failed on gluster13. Please check log file for details.<br>
>>>>>>> [2015-04-08 08:43:43.702126] E<br>
>>>>>>> [glusterd-syncop.c:1602:gd_sync_task_begin]<br>
>>>>>>> 0-management: Locking Peers Failed.<br>
>>>>>>> [2015-04-08 08:44:01.277139] I<br>
>>>>>>><br>
>>>>>>> [glusterd-handler.c:3803:__glusterd_handle_status_volume]<br>
>>>>>>> 0-management: Received status volume req for volume<br>
>>>>>>> testvol<br>
>>>>>>> [2015-04-08 08:44:01.277560] E<br>
>>>>>>> [glusterd-utils.c:148:glusterd_lock] 0-management:<br>
>>>>>>> Unable to get lock for uuid:<br>
>>>>>>> 03a32bce-ec63-4dc3-a287-4901a55dd8c9, lock held by:<br>
>>>>>>> 03a32bce-ec63-4dc3-a287-4901a55dd8c9<br>
>>>>>>> [2015-04-08 08:44:01.277639] E<br>
>>>>>>> [glusterd-op-sm.c:6584:glusterd_op_sm] 0-management:<br>
>>>>>>> handler returned: -1<br>
>>>>>>> [2015-04-08 08:44:01.277676] E<br>
>>>>>>> [glusterd-syncop.c:105:gd_collate_errors] 0-: Locking<br>
>>>>>>> failed on gluster13. Please check log file for details.<br>
>>>>>>> [2015-04-08 08:44:01.281514] E<br>
>>>>>>> [glusterd-syncop.c:1602:gd_sync_task_begin]<br>
>>>>>>> 0-management: Locking Peers Failed.<br>
>>>>>>> [2015-04-08 08:45:42.599796] I<br>
>>>>>>><br>
>>>>>>> [glusterd-handler.c:3803:__glusterd_handle_status_volume]<br>
>>>>>>> 0-management: Received status volume req for volume<br>
>>>>>>> testvol<br>
>>>>>>> [2015-04-08 08:45:42.600343] E<br>
>>>>>>> [glusterd-utils.c:148:glusterd_lock] 0-management:<br>
>>>>>>> Unable to get lock for uuid:<br>
>>>>>>> 03a32bce-ec63-4dc3-a287-4901a55dd8c9, lock held by:<br>
>>>>>>> 03a32bce-ec63-4dc3-a287-4901a55dd8c9<br>
>>>>>>> [2015-04-08 08:45:42.600417] E<br>
>>>>>>> [glusterd-op-sm.c:6584:glusterd_op_sm] 0-management:<br>
>>>>>>> handler returned: -1<br>
>>>>>>> [2015-04-08 08:45:42.600482] E<br>
>>>>>>> [glusterd-syncop.c:105:gd_collate_errors] 0-: Locking<br>
>>>>>>> failed on gluster13. Please check log file for details.<br>
>>>>>>> [2015-04-08 08:45:42.601039] E<br>
>>>>>>> [glusterd-syncop.c:1602:gd_sync_task_begin]<br>
>>>>>>> 0-management: Locking Peers Failed.<br>
>>>>>>><br>
>>>>>>> Does this situation relate to my quota problems?<br>
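One detail worth pulling out of the log above: the "lock held by" UUID identifies the peer whose glusterd still holds the cluster lock. A small sketch of extracting it, run on a sample line copied from the log quoted above:

```shell
# Sample log line copied from the glusterd log quoted in this thread
line='[2015-04-08 08:43:02.460581] E [glusterd-utils.c:148:glusterd_lock] 0-management: Unable to get lock for uuid: 03a32bce-ec63-4dc3-a287-4901a55dd8c9, lock held by: 03a32bce-ec63-4dc3-a287-4901a55dd8c9'

# Extract the holder UUID; comparing it with the UUID in each node's
# /var/lib/glusterd/glusterd.info tells you which peer holds the lock
echo "$line" | sed -n 's/.*lock held by: \([0-9a-f-]*\).*/\1/p'
# prints: 03a32bce-ec63-4dc3-a287-4901a55dd8c9
```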
>>>>>>><br>
>>>>>><br>
>>>>>> This is a different glusterd issue. Can we get the<br>
>>>>>> glusterd logs from gluster13?<br>
>>>>>> Can we get access to these machines, so that we can<br>
>>>>>> debug live?<br>
>>>>>><br>
>>>>>> Thanks,<br>
>>>>>> Vijay<br>
>>>>>><br>
>>>>> Regarding the quota issue: the quota feature is enabled<br>
>>>>> successfully; I am wondering why quotad is not started.<br>
>>>>> If we get access to the machine, it will be easier<br>
>>>>> to debug the issue.<br>
>>>>><br>
>>>>> Thanks,<br>
>>>>> Vijay<br>
>>>>><br>
>>>>><br>
>>>>>>><br>
>>>>>>> Best regards,<br>
>>>>>>> Kondo<br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> 2015-04-08 15:14 GMT+09:00 Vijaikumar M<br>
</div></div>>>>>>>> <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>>>:<br>
<span class="">>>>>>>><br>
>>>>>>> Hi Kondo,<br>
>>>>>>><br>
>>>>>>> I suspect that on one of the nodes the quota feature<br>
>>>>>>> is not set for some reason, and hence quotad is not<br>
>>>>>>> starting.<br>
>>>>>>><br>
>>>>>>> On all the nodes, can you check if the below option<br>
>>>>>>> is set to 'on':<br>
>>>>>>><br>
>>>>>>> # grep quota /var/lib/glusterd/vols/<volname>/info<br>
>>>>>>> features.quota=on<br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> Also can I get brick logs from all the nodes?<br>
>>>>>>><br>
>>>>>>> Also, can you create a temporary volume, enable<br>
>>>>>>> quota on it, and see if quota works fine<br>
>>>>>>> with this volume?<br>
>>>>>>><br>
>>>>>>><br>
>>>>>>> Thanks,<br>
>>>>>>> Vijay<br>
>>>>>>><br>
>>>>>>> On Tuesday 07 April 2015 08:34 PM, kenji kondo<br>
>>>>>>> wrote:<br>
>>>>>>>> Hi Vijay,<br>
>>>>>>>><br>
>>>>>>>> Please find the attached.<br>
>>>>>>>> I got logs of the server and client.<br>
>>>>>>>> As before, I could not create a file<br>
>>>>>>>> after setting the quota usage limit.<br>
>>>>>>>><br>
>>>>>>>> Best regards,<br>
>>>>>>>> Kondo<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> 2015-04-07 18:34 GMT+09:00 Vijaikumar M<br>
</span>>>>>>>>> <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>>>:<br>
<span class="">>>>>>>>><br>
>>>>>>>> Hi Konda,<br>
>>>>>>>><br>
>>>>>>>> Can we get all the log files?<br>
>>>>>>>><br>
>>>>>>>> # gluster volume quota <volname> disable<br>
>>>>>>>> # gluster volume quota <volname> enable<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> Now copy all the logs files.<br>
>>>>>>>><br>
>>>>>>>> Thanks,<br>
>>>>>>>> Vijay<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> On Tuesday 07 April 2015 12:39 PM, K.Kondo<br>
>>>>>>>> wrote:<br>
>>>>>>>>> Thank you very much ! Vijay<br>
>>>>>>>>> I want to use quota because each volume<br>
>>>>>>>>> has become too big.<br>
>>>>>>>>><br>
>>>>>>>>> Best regards,<br>
>>>>>>>>> Kondo<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> On 2015/04/07 15:18, Vijaikumar M<br>
>>>>>>>>> <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a><br>
</span>>>>>>>>>> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>>> のメッセージ:<br>
<span class="">>>>>>>>>><br>
>>>>>>>>>> Hi Kondo,<br>
>>>>>>>>>><br>
>>>>>>>>>> I couldn’t find a clue in the logs. I will<br>
>>>>>>>>>> discuss this issue with my<br>
>>>>>>>>>> colleagues today.<br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>> Thanks,<br>
>>>>>>>>>> Vijay<br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>> On Monday 06 April 2015 10:56 PM, kenji<br>
>>>>>>>>>> kondo wrote:<br>
>>>>>>>>>>> Hello Vijay,<br>
>>>>>>>>>>> Is there any idea on this?<br>
>>>>>>>>>>> Best regards,<br>
>>>>>>>>>>> Kondo<br>
>>>>>>>>>>><br>
>>>>>>>>>>> 2015-03-31 22:46 GMT+09:00 kenji kondo<br>
>>>>>>>>>>> <<a href="mailto:kkay.jp@gmail.com">kkay.jp@gmail.com</a><br>
</span>>>>>>>>>>>> <mailto:<a href="mailto:kkay.jp@gmail.com">kkay.jp@gmail.com</a>>>:<br>
<span class="">>>>>>>>>>>><br>
>>>>>>>>>>> Hi Vijay,<br>
>>>>>>>>>>><br>
>>>>>>>>>>> I'm sorry for the late reply.<br>
>>>>>>>>>>> I got the debug-mode log, as<br>
>>>>>>>>>>> attached.<br>
>>>>>>>>>>> In this test, unfortunately, quota<br>
>>>>>>>>>>> did not work, the same as before.<br>
>>>>>>>>>>><br>
>>>>>>>>>>> Could you find the cause of my problem?<br>
>>>>>>>>>>><br>
>>>>>>>>>>> Best regards,<br>
>>>>>>>>>>> Kondo<br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>>> 2015-03-25 17:20 GMT+09:00 Vijaikumar<br>
>>>>>>>>>>> M <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a><br>
</span>>>>>>>>>>>> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>>>:<br>
<span class="">>>>>>>>>>>><br>
>>>>>>>>>>> Hi Kondo,<br>
>>>>>>>>>>><br>
>>>>>>>>>>> For some reason enabling quota was<br>
>>>>>>>>>>> not successful. We may have to<br>
>>>>>>>>>>> re-try enabling quota.<br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>>> Thanks,<br>
>>>>>>>>>>> Vijay<br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>>> On Tuesday 24 March 2015 07:08<br>
>>>>>>>>>>> PM, kenji kondo wrote:<br>
>>>>>>>>>>>> Hi Vijay,<br>
>>>>>>>>>>>> Thanks for your checking.<br>
>>>>>>>>>>>> Unfortunately, I can't stop<br>
>>>>>>>>>>>> the service right now because<br>
>>>>>>>>>>>> many users are using it.<br>
>>>>>>>>>>>> But I want to know the cause<br>
>>>>>>>>>>>> of this trouble, so I will plan<br>
>>>>>>>>>>>> a stop. Please wait for the<br>
>>>>>>>>>>>> log.<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Best regards,<br>
>>>>>>>>>>>> Kondo<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> 2015-03-24 17:01 GMT+09:00<br>
>>>>>>>>>>>> Vijaikumar M<br>
>>>>>>>>>>>> <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a><br>
</span>>>>>>>>>>>>> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>>>:<br>
<div><div class="h5">>>>>>>>>>>>><br>
>>>>>>>>>>>> Hi Kondo,<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> I couldn't find much clue in<br>
>>>>>>>>>>>> the glusterd logs, other<br>
>>>>>>>>>>>> than the error message you<br>
>>>>>>>>>>>> mentioned below.<br>
>>>>>>>>>>>> Can you try disabling and<br>
>>>>>>>>>>>> enabling quota again and<br>
>>>>>>>>>>>> see if this starts quotad?<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Try the below commands:<br>
>>>>>>>>>>>> # gluster volume quota<br>
>>>>>>>>>>>> <volname> disable<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> wait for all quota process<br>
>>>>>>>>>>>> to terminate<br>
>>>>>>>>>>>> # ps -ef | grep quota<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> # service glusterd stop<br>
>>>>>>>>>>>> # glusterd -LDEBUG<br>
>>>>>>>>>>>> # gluster volume quota<br>
>>>>>>>>>>>> <volname> enable<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Now verify if quotad is running<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> Thanks,<br>
>>>>>>>>>>>> Vijay<br>
>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>> On Monday 23 March 2015<br>
>>>>>>>>>>>> 06:24 PM, kenji kondo wrote:<br>
>>>>>>>>>>>>> Hi Vijay,<br>
>>>>>>>>>>>>> As you pointed out, quotad<br>
>>>>>>>>>>>>> is not running on<br>
>>>>>>>>>>>>> any of the servers.<br>
>>>>>>>>>>>>> I checked the volume status<br>
>>>>>>>>>>>>> and got the following output.<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> Quota Daemon on<br>
>>>>>>>>>>>>> gluster25    N/A    N    N/A<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> So I attached the requested<br>
>>>>>>>>>>>>> log<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> 'etc-glusterfs-glusterd.vol.log'.<br>
>>>>>>>>>>>>> The error messages can be<br>
>>>>>>>>>>>>> found in the log.<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> [2015-03-19<br>
>>>>>>>>>>>>> 11:51:07.457697] E<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> [glusterd-quota.c:1467:glusterd_op_stage_quota]<br>
>>>>>>>>>>>>> 0-management: Quota is<br>
>>>>>>>>>>>>> disabled, please enable quota<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> If you need more<br>
>>>>>>>>>>>>> information to solve this<br>
>>>>>>>>>>>>> problem, please ask me.<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> Best regards,<br>
>>>>>>>>>>>>> Kondo<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> 2015-03-23 16:04 GMT+09:00<br>
>>>>>>>>>>>>> Vijaikumar M<br>
>>>>>>>>>>>>> <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a><br>
</div></div>>>>>>>>>>>>>> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>>>:<br>
<span class="">>>>>>>>>>>>>><br>
>>>>>>>>>>>>> Hi Kondo,<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> Can you please verify<br>
>>>>>>>>>>>>> if quotad is running?<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
</span>>>>>>>>>>>>>> root@rh1:~ *# gluster<br>
>>>>>>>>>>>>> volume status*<br>
<span class="">>>>>>>>>>>>>> Status of volume: vol1<br>
>>>>>>>>>>>>> Gluster process TCP<br>
>>>>>>>>>>>>> Port RDMA Port Online<br>
>>>>>>>>>>>>> Pid<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> ------------------------------------------------------------------------------<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> Brick<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> rh1:/var/opt/gluster/bricks/b1/dir<br>
>>>>>>>>>>>>> 49152 0 Y 1858<br>
>>>>>>>>>>>>> NFS Server on localhost<br>
>>>>>>>>>>>>> 2049 0 Y 1879<br>
</span>>>>>>>>>>>>>> *Quota Daemon on<br>
>>>>>>>>>>>>> localhost N/A N/A<br>
>>>>>>>>>>>>> Y 1914 **<br>
>>>>>>>>>>>>> *<br>
<span class="">>>>>>>>>>>>>> Task Status of Volume vol1<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> ------------------------------------------------------------------------------<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> There are no active<br>
>>>>>>>>>>>>> volume tasks<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
</span>>>>>>>>>>>>>> root@rh1:~ # *ps -ef |<br>
>>>>>>>>>>>>> grep quotad*<br>
<span class="">>>>>>>>>>>>>> root 1914 1 0<br>
>>>>>>>>>>>>> 12:29 ? 00:00:00<br>
>>>>>>>>>>>>> /usr/local/sbin/glusterfs<br>
>>>>>>>>>>>>> -s localhost<br>
>>>>>>>>>>>>> --volfile-id<br>
>>>>>>>>>>>>> gluster/quotad -p<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> /var/lib/glusterd/quotad/run/quotad.pid<br>
>>>>>>>>>>>>> -l<br>
>>>>>>>>>>>>><br>
</span>>>>>>>>>>>>>> */var/log/glusterfs/quotad.log*-S<br>
<span class="">>>>>>>>>>>>>><br>
>>>>>>>>>>>>> /var/run/gluster/bb6ab82f70f555fd5c0e188fa4e09584.socket<br>
>>>>>>>>>>>>> --xlator-option<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> *replicate*.data-self-heal=off<br>
>>>>>>>>>>>>> --xlator-option<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> *replicate*.metadata-self-heal=off<br>
>>>>>>>>>>>>> --xlator-option<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> *replicate*.entry-self-heal=off<br>
>>>>>>>>>>>>> root 1970 1511 0<br>
>>>>>>>>>>>>> 12:31 pts/1 00:00:00<br>
>>>>>>>>>>>>> grep quotad<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
</span>>>>>>>>>>>>>> root@rh1:~ # *gluster<br>
>>>>>>>>>>>>> volume info*<br>
<span class="">>>>>>>>>>>>>> Volume Name: vol1<br>
>>>>>>>>>>>>> Type: Distribute<br>
>>>>>>>>>>>>> Volume ID:<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> a55519ec-65d1-4741-9ad3-f94020fc9b21<br>
>>>>>>>>>>>>> Status: Started<br>
>>>>>>>>>>>>> Number of Bricks: 1<br>
>>>>>>>>>>>>> Transport-type: tcp<br>
>>>>>>>>>>>>> Bricks:<br>
>>>>>>>>>>>>> Brick1:<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> rh1:/var/opt/gluster/bricks/b1/dir<br>
>>>>>>>>>>>>> Options Reconfigured:<br>
</span>>>>>>>>>>>>>> *features.quota: on**<br>
<div><div class="h5">>>>>>>>>>>>>> *<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> If quotad is not<br>
>>>>>>>>>>>>> running, can you please<br>
>>>>>>>>>>>>> provide glusterd logs<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> 'usr-local-etc-glusterfs-glusterd.vol.log'.<br>
>>>>>>>>>>>>> I will check if there<br>
>>>>>>>>>>>>> are any issues starting<br>
>>>>>>>>>>>>> quotad.<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> Thanks,<br>
>>>>>>>>>>>>> Vijay<br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>> On Monday 23 March 2015<br>
>>>>>>>>>>>>> 11:54 AM, K.Kondo wrote:<br>
>>>>>>>>>>>>>> Hi Vijay,<br>
>>>>>>>>>>>>>> I could not find<br>
>>>>>>>>>>>>>> the "quotad.log" file in<br>
>>>>>>>>>>>>>> the directory<br>
>>>>>>>>>>>>>> /var/log/glusterfs on<br>
>>>>>>>>>>>>>> either the servers or the<br>
>>>>>>>>>>>>>> client. But another test<br>
>>>>>>>>>>>>>> server has the log.<br>
>>>>>>>>>>>>>> Do you know why the<br>
>>>>>>>>>>>>>> file is missing?<br>
>>>>>>>>>>>>>> Thanks,<br>
>>>>>>>>>>>>>> Kondo<br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>> On 2015/03/23 13:41,<br>
>>>>>>>>>>>>>> Vijaikumar M<br>
>>>>>>>>>>>>>> <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a><br>
>>>>>>>>>>>>>><br>
</div></div>>>>>>>>>>>>>>> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>>><br>
<span class="">>>>>>>>>>>>>>> のメッセージ:<br>
>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>> Hi Kondo,<br>
>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>> the log file 'quotad.log'<br>
>>>>>>>>>>>>>>> is missing from the<br>
>>>>>>>>>>>>>>> attachment. Can you<br>
>>>>>>>>>>>>>>> provide this log file<br>
>>>>>>>>>>>>>>> as well?<br>
>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>> Thanks,<br>
>>>>>>>>>>>>>>> Vijay<br>
>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>> On Monday 23 March<br>
>>>>>>>>>>>>>>> 2015 09:50 AM, kenji<br>
>>>>>>>>>>>>>>> kondo wrote:<br>
>>>>>>>>>>>>>>>> Hi Vijay,<br>
>>>>>>>>>>>>>>>> Please find the<br>
>>>>>>>>>>>>>>>> attached.<br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>> Best regards,<br>
>>>>>>>>>>>>>>>> Kondo<br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>> 2015-03-23 12:53<br>
>>>>>>>>>>>>>>>> GMT+09:00 Vijaikumar<br>
>>>>>>>>>>>>>>>> M<br>
>>>>>>>>>>>>>>>> <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a><br>
>>>>>>>>>>>>>>>><br>
</span>>>>>>>>>>>>>>>>> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>>>:<br>
<div><div class="h5">>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>> Hi Kondo,<br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>> Can you please<br>
>>>>>>>>>>>>>>>> provide the below-<br>
>>>>>>>>>>>>>>>> mentioned<br>
>>>>>>>>>>>>>>>> glusterfs logs?<br>
>>>>>>>>>>>>>>>> client logs<br>
>>>>>>>>>>>>>>>> (name of this<br>
>>>>>>>>>>>>>>>> log will be<br>
>>>>>>>>>>>>>>>> prefixed with<br>
>>>>>>>>>>>>>>>> mount-point<br>
>>>>>>>>>>>>>>>> dirname)<br>
>>>>>>>>>>>>>>>> brick logs<br>
>>>>>>>>>>>>>>>> quotad logs<br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>> Thanks,<br>
>>>>>>>>>>>>>>>> Vijay<br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>> On Friday 20<br>
>>>>>>>>>>>>>>>> March 2015 06:31<br>
>>>>>>>>>>>>>>>> PM, kenji kondo<br>
>>>>>>>>>>>>>>>> wrote:<br>
>>>>>>>>>>>>>>>>> Hi, Vijay and<br>
>>>>>>>>>>>>>>>>> Peter<br>
>>>>>>>>>>>>>>>>> Thanks for your<br>
>>>>>>>>>>>>>>>>> reply.<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> I created a new volume<br>
>>>>>>>>>>>>>>>>> "testvol" with two bricks<br>
>>>>>>>>>>>>>>>>> and set quota, to simplify<br>
>>>>>>>>>>>>>>>>> this problem.<br>
>>>>>>>>>>>>>>>>> I got the following<br>
>>>>>>>>>>>>>>>>> glusterfs log after trying<br>
>>>>>>>>>>>>>>>>> to create a directory and<br>
>>>>>>>>>>>>>>>>> a file.<br>
>>>>>>>>>>>>>>>>> BTW, my glusterd was<br>
>>>>>>>>>>>>>>>>> upgraded from an older<br>
>>>>>>>>>>>>>>>>> version, though I don't<br>
>>>>>>>>>>>>>>>>> know whether that is<br>
>>>>>>>>>>>>>>>>> related.<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> Best regards,<br>
>>>>>>>>>>>>>>>>> Kondo<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> [2015-03-20<br>
>>>>>>>>>>>>>>>>> 03:42:52.931016] I<br>
>>>>>>>>>>>>>>>>> [MSGID: 100030]<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> [glusterfsd.c:1998:main]<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> 0-/usr/sbin/glusterfs:<br>
>>>>>>>>>>>>>>>>> Started running<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> /usr/sbin/glusterfs<br>
>>>>>>>>>>>>>>>>> version<br>
>>>>>>>>>>>>>>>>> 3.6.0.29 (args:<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> /usr/sbin/glusterfs<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> --volfile-server=gluster10<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> --volfile-id=testvol<br>
>>>>>>>>>>>>>>>>> testvol)<br>
>>>>>>>>>>>>>>>>> [2015-03-20<br>
>>>>>>>>>>>>>>>>> 03:42:52.944850] I<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> [dht-shared.c:337:dht_init_regex]<br>
>>>>>>>>>>>>>>>>> 0-testvol-dht:<br>
>>>>>>>>>>>>>>>>> using regex<br>
>>>>>>>>>>>>>>>>> rsync-hash-regex =<br>
>>>>>>>>>>>>>>>>> ^\.(.+)\.[^.]+$<br>
>>>>>>>>>>>>>>>>> [2015-03-20<br>
>>>>>>>>>>>>>>>>> 03:42:52.946256] I<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> [client.c:2280:notify]<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> 0-testvol-client-0: parent translators are ready, attempting connect on transport<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.950674] I [client.c:2280:notify] 0-testvol-client-1: parent translators are ready, attempting connect on transport<br>
>>>>>>>>>>>>>>>>> Final graph:<br>
>>>>>>>>>>>>>>>>> +------------------------------------------------------------------------------+<br>
>>>>>>>>>>>>>>>>> 1: volume testvol-client-0<br>
>>>>>>>>>>>>>>>>> 2: type protocol/client<br>
>>>>>>>>>>>>>>>>> 3: option ping-timeout 42<br>
>>>>>>>>>>>>>>>>> 4: option remote-host gluster24<br>
>>>>>>>>>>>>>>>>> 5: option remote-subvolume /export25/brick<br>
>>>>>>>>>>>>>>>>> 6: option transport-type socket<br>
>>>>>>>>>>>>>>>>> 7: option send-gids true<br>
>>>>>>>>>>>>>>>>> 8: end-volume<br>
>>>>>>>>>>>>>>>>> 9:<br>
>>>>>>>>>>>>>>>>> 10: volume testvol-client-1<br>
>>>>>>>>>>>>>>>>> 11: type protocol/client<br>
>>>>>>>>>>>>>>>>> 12: option ping-timeout 42<br>
>>>>>>>>>>>>>>>>> 13: option remote-host gluster25<br>
>>>>>>>>>>>>>>>>> 14: option remote-subvolume /export25/brick<br>
>>>>>>>>>>>>>>>>> 15: option transport-type socket<br>
>>>>>>>>>>>>>>>>> 16: option send-gids true<br>
>>>>>>>>>>>>>>>>> 17: end-volume<br>
>>>>>>>>>>>>>>>>> 18:<br>
>>>>>>>>>>>>>>>>> 19: volume testvol-dht<br>
>>>>>>>>>>>>>>>>> 20: type cluster/distribute<br>
>>>>>>>>>>>>>>>>> 21: subvolumes testvol-client-0 testvol-client-1<br>
>>>>>>>>>>>>>>>>> 22: end-volume<br>
>>>>>>>>>>>>>>>>> 23:<br>
>>>>>>>>>>>>>>>>> 24: volume testvol-write-behind<br>
>>>>>>>>>>>>>>>>> 25: type performance/write-behind<br>
>>>>>>>>>>>>>>>>> 26: subvolumes testvol-dht<br>
>>>>>>>>>>>>>>>>> 27: end-volume<br>
>>>>>>>>>>>>>>>>> 28:<br>
>>>>>>>>>>>>>>>>> 29: volume testvol-read-ahead<br>
>>>>>>>>>>>>>>>>> 30: type performance/read-ahead<br>
>>>>>>>>>>>>>>>>> 31: subvolumes testvol-write-behind<br>
>>>>>>>>>>>>>>>>> 32: end-volume<br>
>>>>>>>>>>>>>>>>> 33:<br>
>>>>>>>>>>>>>>>>> 34: volume testvol-io-cache<br>
>>>>>>>>>>>>>>>>> 35: type performance/io-cache<br>
>>>>>>>>>>>>>>>>> 36: subvolumes testvol-read-ahead<br>
</div></div><span class="">>>>>>>>>>>>>>>>>> 37: end-volume<br>
>>>>>>>>>>>>>>>>> 38:<br>
>>>>>>>>>>>>>>>>> 39: volume testvol-quick-read<br>
>>>>>>>>>>>>>>>>> 40: type performance/quick-read<br>
>>>>>>>>>>>>>>>>> 41: subvolumes testvol-io-cache<br>
>>>>>>>>>>>>>>>>> 42: end-volume<br>
>>>>>>>>>>>>>>>>> 43:<br>
>>>>>>>>>>>>>>>>> 44: volume testvol-md-cache<br>
>>>>>>>>>>>>>>>>> 45: type performance/md-cache<br>
>>>>>>>>>>>>>>>>> 46: subvolumes testvol-quick-read<br>
>>>>>>>>>>>>>>>>> 47: end-volume<br>
>>>>>>>>>>>>>>>>> 48:<br>
>>>>>>>>>>>>>>>>> 49: volume testvol<br>
>>>>>>>>>>>>>>>>> 50: type debug/io-stats<br>
</span><span class="">>>>>>>>>>>>>>>>>> 51: option latency-measurement off<br>
>>>>>>>>>>>>>>>>> 52: option count-fop-hits off<br>
>>>>>>>>>>>>>>>>> 53: subvolumes testvol-md-cache<br>
>>>>>>>>>>>>>>>>> 54: end-volume<br>
>>>>>>>>>>>>>>>>> 55:<br>
>>>>>>>>>>>>>>>>> 56: volume meta-autoload<br>
>>>>>>>>>>>>>>>>> 57: type meta<br>
>>>>>>>>>>>>>>>>> 58: subvolumes testvol<br>
>>>>>>>>>>>>>>>>> 59: end-volume<br>
>>>>>>>>>>>>>>>>> 60:<br>
>>>>>>>>>>>>>>>>> +------------------------------------------------------------------------------+<br>
</span><div><div class="h5">>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.955337] I [rpc-clnt.c:1759:rpc_clnt_reconfig] 0-testvol-client-0: changing port to 49155 (from 0)<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.957549] I [rpc-clnt.c:1759:rpc_clnt_reconfig] 0-testvol-client-1: changing port to 49155 (from 0)<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.959889] I [client-handshake.c:1415:select_server_supported_programs] 0-testvol-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.960090] I [client-handshake.c:1415:select_server_supported_programs] 0-testvol-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.960376] I [client-handshake.c:1200:client_setvolume_cbk] 0-testvol-client-0: Connected to testvol-client-0, attached to remote volume '/export25/brick'.<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.960405] I [client-handshake.c:1212:client_setvolume_cbk] 0-testvol-client-0: Server and Client lk-version numbers are not same, reopening the fds<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.960471] I [client-handshake.c:1200:client_setvolume_cbk] 0-testvol-client-1: Connected to testvol-client-1, attached to remote volume '/export25/brick'.<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.960478] I [client-handshake.c:1212:client_setvolume_cbk] 0-testvol-client-1: Server and Client lk-version numbers are not same, reopening the fds<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.962288] I [fuse-bridge.c:5042:fuse_graph_setup] 0-fuse: switched to graph 0<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.962351] I [client-handshake.c:188:client_set_lk_version_cbk] 0-testvol-client-1: Server lk version = 1<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.962362] I [client-handshake.c:188:client_set_lk_version_cbk] 0-testvol-client-0: Server lk version = 1<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:42:52.962424] I [fuse-bridge.c:3971:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.14<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:47:13.352234] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt: Volume file changed<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.518667] I [dht-shared.c:337:dht_init_regex] 2-testvol-dht:<br>
</div></div><span class="">>>>>>>>>>>>>>>>>> using regex rsync-hash-regex = ^\.(.+)\.[^.]+$<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.520034] W [graph.c:344:_log_if_unknown_option] 2-testvol-quota: option 'timeout' is not recognized<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.520091] I [client.c:2280:notify] 2-testvol-client-0: parent translators are ready, attempting connect on transport<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.524546] I [client.c:2280:notify] 2-testvol-client-1:<br>
</span><div><div class="h5">>>>>>>>>>>>>>>>>> parent translators are ready, attempting connect on transport<br>
>>>>>>>>>>>>>>>>> Final graph:<br>
>>>>>>>>>>>>>>>>> +------------------------------------------------------------------------------+<br>
>>>>>>>>>>>>>>>>> 1: volume testvol-client-0<br>
>>>>>>>>>>>>>>>>> 2: type protocol/client<br>
>>>>>>>>>>>>>>>>> 3: option ping-timeout 42<br>
>>>>>>>>>>>>>>>>> 4: option remote-host gluster24<br>
>>>>>>>>>>>>>>>>> 5: option remote-subvolume /export25/brick<br>
>>>>>>>>>>>>>>>>> 6: option transport-type socket<br>
>>>>>>>>>>>>>>>>> 7: option send-gids true<br>
>>>>>>>>>>>>>>>>> 8: end-volume<br>
>>>>>>>>>>>>>>>>> 9:<br>
>>>>>>>>>>>>>>>>> 10: volume testvol-client-1<br>
>>>>>>>>>>>>>>>>> 11: type protocol/client<br>
>>>>>>>>>>>>>>>>> 12: option ping-timeout 42<br>
>>>>>>>>>>>>>>>>> 13: option remote-host gluster25<br>
>>>>>>>>>>>>>>>>> 14: option remote-subvolume /export25/brick<br>
>>>>>>>>>>>>>>>>> 15: option transport-type socket<br>
>>>>>>>>>>>>>>>>> 16: option send-gids true<br>
>>>>>>>>>>>>>>>>> 17: end-volume<br>
>>>>>>>>>>>>>>>>> 18:<br>
>>>>>>>>>>>>>>>>> 19: volume testvol-dht<br>
>>>>>>>>>>>>>>>>> 20: type cluster/distribute<br>
>>>>>>>>>>>>>>>>> 21: subvolumes testvol-client-0 testvol-client-1<br>
>>>>>>>>>>>>>>>>> 22: end-volume<br>
>>>>>>>>>>>>>>>>> 23:<br>
</div></div><div><div class="h5">>>>>>>>>>>>>>>>>> 24: volume testvol-quota<br>
>>>>>>>>>>>>>>>>> 25: type features/quota<br>
>>>>>>>>>>>>>>>>> 26: option timeout 0<br>
>>>>>>>>>>>>>>>>> 27: option deem-statfs off<br>
>>>>>>>>>>>>>>>>> 28: subvolumes testvol-dht<br>
>>>>>>>>>>>>>>>>> 29: end-volume<br>
>>>>>>>>>>>>>>>>> 30:<br>
>>>>>>>>>>>>>>>>> 31: volume testvol-write-behind<br>
>>>>>>>>>>>>>>>>> 32: type performance/write-behind<br>
>>>>>>>>>>>>>>>>> 33: subvolumes testvol-quota<br>
>>>>>>>>>>>>>>>>> 34: end-volume<br>
>>>>>>>>>>>>>>>>> 35:<br>
>>>>>>>>>>>>>>>>> 36: volume testvol-read-ahead<br>
>>>>>>>>>>>>>>>>> 37: type performance/read-ahead<br>
>>>>>>>>>>>>>>>>> 38: subvolumes testvol-write-behind<br>
>>>>>>>>>>>>>>>>> 39: end-volume<br>
>>>>>>>>>>>>>>>>> 40:<br>
>>>>>>>>>>>>>>>>> 41: volume testvol-io-cache<br>
>>>>>>>>>>>>>>>>> 42: type performance/io-cache<br>
>>>>>>>>>>>>>>>>> 43: subvolumes testvol-read-ahead<br>
>>>>>>>>>>>>>>>>> 44: end-volume<br>
>>>>>>>>>>>>>>>>> 45:<br>
>>>>>>>>>>>>>>>>> 46: volume testvol-quick-read<br>
>>>>>>>>>>>>>>>>> 47: type performance/quick-read<br>
>>>>>>>>>>>>>>>>> 48: subvolumes testvol-io-cache<br>
>>>>>>>>>>>>>>>>> 49: end-volume<br>
>>>>>>>>>>>>>>>>> 50:<br>
>>>>>>>>>>>>>>>>> 51: volume testvol-md-cache<br>
>>>>>>>>>>>>>>>>> 52: type performance/md-cache<br>
>>>>>>>>>>>>>>>>> 53: subvolumes testvol-quick-read<br>
</div></div><span class="">>>>>>>>>>>>>>>>>> 54: end-volume<br>
>>>>>>>>>>>>>>>>> 55:<br>
</span><span class="">>>>>>>>>>>>>>>>>> 56: volume testvol<br>
>>>>>>>>>>>>>>>>> 57: type debug/io-stats<br>
>>>>>>>>>>>>>>>>> 58: option latency-measurement off<br>
>>>>>>>>>>>>>>>>> 59: option count-fop-hits off<br>
</span><span class="">>>>>>>>>>>>>>>>>> 60: subvolumes testvol-md-cache<br>
>>>>>>>>>>>>>>>>> 61: end-volume<br>
>>>>>>>>>>>>>>>>> 62:<br>
>>>>>>>>>>>>>>>>> 63: volume meta-autoload<br>
>>>>>>>>>>>>>>>>> 64: type meta<br>
>>>>>>>>>>>>>>>>> 65: subvolumes testvol<br>
>>>>>>>>>>>>>>>>> 66: end-volume<br>
>>>>>>>>>>>>>>>>> 67:<br>
>>>>>>>>>>>>>>>>> +------------------------------------------------------------------------------+<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.530005] I [rpc-clnt.c:1759:rpc_clnt_reconfig] 2-testvol-client-1:<br>
</span><span class="">>>>>>>>>>>>>>>>>> changing port to 49155 (from 0)<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.530047] I [rpc-clnt.c:1759:rpc_clnt_reconfig] 2-testvol-client-0:<br>
</span><span class="">>>>>>>>>>>>>>>>>> changing port to 49155 (from 0)<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.539062] I [client-handshake.c:1415:select_server_supported_programs] 2-testvol-client-1:<br>
</span><span class="">>>>>>>>>>>>>>>>>> Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.539299] I [client-handshake.c:1415:select_server_supported_programs] 2-testvol-client-0:<br>
</span><span class="">>>>>>>>>>>>>>>>>> Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.539462] I [client-handshake.c:1200:client_setvolume_cbk] 2-testvol-client-1:<br>
</span><span class="">>>>>>>>>>>>>>>>>> Connected to testvol-client-1, attached to remote volume '/export25/brick'.<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.539485] I [client-handshake.c:1212:client_setvolume_cbk] 2-testvol-client-1:<br>
</span><span class="">>>>>>>>>>>>>>>>>> Server and Client lk-version numbers are not same, reopening the fds<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.539729] I [client-handshake.c:1200:client_setvolume_cbk] 2-testvol-client-0:<br>
</span><span class="">>>>>>>>>>>>>>>>>> Connected to testvol-client-0, attached to remote volume '/export25/brick'.<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.539751] I [client-handshake.c:1212:client_setvolume_cbk] 2-testvol-client-0:<br>
</span><span class="">>>>>>>>>>>>>>>>>> Server and Client lk-version numbers are not same, reopening the fds<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.542878] I [fuse-bridge.c:5042:fuse_graph_setup] 0-fuse: switched to<br>
</span><span class="">>>>>>>>>>>>>>>>>> graph 2<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.542959] I [client-handshake.c:188:client_set_lk_version_cbk] 2-testvol-client-1:<br>
</span><span class="">>>>>>>>>>>>>>>>>> Server lk version = 1<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:47:15.542987] I [client-handshake.c:188:client_set_lk_version_cbk] 2-testvol-client-0:<br>
</span><span class="">>>>>>>>>>>>>>>>>> Server lk version = 1<br>
</span><span class="">>>>>>>>>>>>>>>>>> [2015-03-20 03:48:04.586291] I [client.c:2289:notify] 0-testvol-client-0: current graph is no longer active, destroying rpc_client<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:48:04.586360] I [client.c:2289:notify] 0-testvol-client-1: current graph is no longer active, destroying rpc_client<br>
>>>>>>>>>>>>>>>>> [2015-03-20<br>
</span><div><div class="h5">>>>>>>>>>>>>>>>>> 03:48:04.586378] I [client.c:2215:client_rpc_notify] 0-testvol-client-0: disconnected from testvol-client-0. Client process will keep trying to connect to glusterd until brick's port is available<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:48:04.586430] I [client.c:2215:client_rpc_notify] 0-testvol-client-1: disconnected from testvol-client-1. Client process will keep trying to connect to glusterd until brick's port is available<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:48:04.589552] W [client-rpc-fops.c:306:client3_3_mkdir_cbk] 2-testvol-client-0: remote operation failed: Transport endpoint is not connected. Path: /test/a<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:48:04.589608] W [fuse-bridge.c:481:fuse_entry_cbk] 0-glusterfs-fuse: 78: MKDIR() /test/a => -1 (Transport endpoint is not connected)<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:48:11.073349] W [client-rpc-fops.c:2212:client3_3_create_cbk] 2-testvol-client-1: remote operation failed: Transport endpoint is not connected. Path: /test/f<br>
>>>>>>>>>>>>>>>>> [2015-03-20 03:48:11.073419] W [fuse-bridge.c:1937:fuse_create_cbk] 0-glusterfs-fuse: 82: /test/f => -1 (Transport endpoint is not connected)<br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> 2015-03-20<br>
</div></div>>>>>>>>>>>>>>>>>> 11:27 GMT+09:00 Vijaikumar M <<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a> <mailto:<a href="mailto:vmallika@redhat.com">vmallika@redhat.com</a>>>:<br>
<span class="">>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>> Hi Kondo,<br>
>>>>>>>>>>>>>>>>><br>
</span><span class="">>>>>>>>>>>>>>>>>> Can you please provide all the glusterfs log files?<br>
>>>>>>>>>>>>>>>>><br>
</span><span class="">>>>>>>>>>>>>>>>>> Thanks,<br>
>>>>>>>>>>>>>>>>> Vijay<br>
>>>>>>>>>>>>>>>>><br>
</span><span class="">>>>>>>>>>>>>>>>>> On Friday 20 March 2015 07:33 AM, K.Kondo wrote:<br>
>>>>>>>>>>>>>>>>>> Hello, experts<br>
>>>>>>>>>>>>>>>>>><br>
</span><div><div class="h5">>>>>>>>>>>>>>>>>>> I had a problem with quota. I set a quota on one distributed volume, "vol12", as below.<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> gluster> volume quota vol12 enable<br>
>>>>>>>>>>>>>>>>>> volume quota : success<br>
>>>>>>>>>>>>>>>>>> gluster> volume quota vol12 limit-usage /test 10GB<br>
>>>>>>>>>>>>>>>>>> volume quota : success<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> But then I couldn't create a file or directory; it failed with the error message below.<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> On a client host,<br>
>>>>>>>>>>>>>>>>>> $ cd test (mounted using fuse)<br>
>>>>>>>>>>>>>>>>>> $ mkdir a<br>
>>>>>>>>>>>>>>>>>> mkdir: cannot create directory `a': Transport endpoint is not connected<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> Additionally, I couldn't check the quota status using the gluster command.<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> gluster> volume quota vol12 list<br>
>>>>>>>>>>>>>>>>>> Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?<br>
>>>>>>>>>>>>>>>>>> ---------------------------------------------------------------------------------------------------------------------------<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> Here the command hangs, so I have to press Ctrl-C.<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> The Gluster version is 3.6.1 on the servers and 3.6.0.29 on the clients.<br>
>>>>>>>>>>>>>>>>>><br>
</div></div>>>>>>>>>>>>>>>>>>> Any ideas about this?<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> Best regards,<br>
>>>>>>>>>>>>>>>>>><br>
<span class="">>>>>>>>>>>>>>>>>>> K. Kondo<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> _______________________________________________<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> Gluster-users mailing list<br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
</span>>>>>>>>>>>>>>>>>>> <mailto:<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>><br>
>>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>>> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
<div class="HOEnZb"><div class="h5">>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>><br>
>>>>><br>
>>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>><br>
>><br>
><br>
><br>
><br>
><br>
> _______________________________________________<br>
> Gluster-users mailing list<br>
> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
><br>
<br>
</div></div><span class="HOEnZb"><font color="#888888">--<br>
~Atin<br>
</font></span></blockquote></div><br></div>