<div>Thanks for your answer! I haven't found the link file. What is the link file's name or format?</div><div><br></div><div><div style="color:#909090;font-family:Arial Narrow;font-size:12px">------------------</div><div style="font-size:14px;font-family:Verdana;color:#000;"><div><br></div><div>------</div><div><font face="Arial Black">LiLi</font></div></div></div><div> </div><div><div><br></div><div><br></div><div style="font-size: 12px;font-family: Arial Narrow;padding:2px 0 2px 0;">------------------ Original ------------------</div><div style="font-size: 12px;background:#efefef;padding:8px;"><div><b>From: </b> "gluster-users-request" <gluster-users-request@gluster.org></div><div><b>Date: </b> Fri, Dec 23, 2016 08:00 PM</div><div><b>To: </b> "gluster-users" <gluster-users@gluster.org></div><div><b>Subject: </b> Gluster-users Digest, Vol 104, Issue 22</div></div><div><br></div>Send Gluster-users mailing list submissions to<br> gluster-users@gluster.org<br><br>To subscribe or unsubscribe via the World Wide Web, visit<br> http://www.gluster.org/mailman/listinfo/gluster-users<br>or, via email, send a message with subject or body 'help' to<br> gluster-users-request@gluster.org<br><br>You can reach the person managing the list at<br> gluster-users-owner@gluster.org<br><br>When replying, please edit your Subject line so it is more specific<br>than "Re: Contents of Gluster-users digest..."<br><br><br>Today's Topics:<br><br> 1. Re: Heal command stopped (Mohammed Rafi K C)<br> 2. Re: install Gluster 3.9 on CentOS (Grant Ridder)<br> 3. DHT DHTLINKFILE location (LiLi)<br> 4. Re: DHT DHTLINKFILE location (Mohammed Rafi K C)<br> 5. Re: File operation failure on simple distributed volume<br> (Mohammed Rafi K C)<br><br><br>----------------------------------------------------------------------<br><br>Message: 1<br>Date: Thu, 22 Dec 2016 18:26:34 +0530<br>From: Mohammed Rafi K C <rkavunga@redhat.com><br>To: Miloš Čučulović
- MDPI <cuculovic@mdpi.com>,<br> "gluster-users@gluster.org" <gluster-users@gluster.org><br>Subject: Re: [Gluster-users] Heal command stopped<br>Message-ID: <85da4ca8-82b4-52ab-091d-3951f20983e9@redhat.com><br>Content-Type: text/plain; charset=UTF-8<br><br>Hi Miloš Čučulović,<br><br>Can you please give us the gluster volume info output and the log files for the<br>bricks, glusterd and the self-heal daemon.<br><br><br>Regards<br><br>Rafi KC<br><br><br>On 12/22/2016 03:56 PM, Miloš Čučulović - MDPI wrote:<br>> I recently added a new replica server and have now:<br>> Number of Bricks: 1 x 2 = 2<br>><br>> The heal was launched automatically and was working until yesterday<br>> (copied 5.5TB of files from a total of 6.2TB). Now, the copy seems to have<br>> stopped; I do not see any file change on the new replica brick server.<br>><br>> When trying to add a new file to the volume and checking the physical<br>> files on the replica brick, the file is not there.<br>><br>> When I try to run a full heal with the command:<br>> sudo gluster volume heal storage full<br>><br>> I am getting:<br>><br>> Launching heal operation to perform full self heal on volume storage<br>> has been unsuccessful on bricks that are down. Please check if all<br>> brick processes are running.<br>><br>> My storage info shows both bricks there.<br>><br>> Any idea?<br>><br>><br><br><br><br>------------------------------<br><br>Message: 2<br>Date: Thu, 22 Dec 2016 16:51:48 -0800<br>From: Grant Ridder <shortdudey123@gmail.com><br>To: "Kaleb S. KEITHLEY" <kkeithle@redhat.com><br>Cc: gluster-users@gluster.org<br>Subject: Re: [Gluster-users] install Gluster 3.9 on CentOS<br>Message-ID:<br> <CAPiURgXNNasmJ3Mc2JTuCX=A74DxU5vDytM4Sr1jxwwUTCH--w@mail.gmail.com><br>Content-Type: text/plain; charset="utf-8"<br><br>Thanks for the info! Generally speaking, how long has it taken in the past<br>to be promoted to the main mirror? 
(I realize this might be skewed right<br>now due to the holiday season.)<br><br>-Grant<br><br>On Tue, Dec 20, 2016 at 10:36 AM, Kaleb S. KEITHLEY <kkeithle@redhat.com><br>wrote:<br><br>> On 12/20/2016 12:19 PM, Grant Ridder wrote:<br>><br>>> Hi,<br>>><br>>> I am not seeing 3.9 in the Storage SIG for CentOS 6 or 7<br>>> http://mirror.centos.org/centos/7.2.1511/storage/x86_64/<br>>> http://mirror.centos.org/centos/6.8/storage/x86_64/<br>>><br>>> However, I do see it<br>>> here: http://buildlogs.centos.org/centos/7/storage/x86_64/<br>>><br>>> Is that expected?<br>>><br>><br>> Yes.<br>><br>> Did the Storage SIG repo change locations?<br>>><br>><br>> No.<br>><br>> Until someone tests and gives positive feedback, they remain in buildlogs.<br>><br>> Much the same way Fedora RPMs remain in Updates-Testing until they receive<br>> +3 karma (or wait for 14 days).<br>><br>> --<br>><br>> Kaleb<br>><br>><br>><br>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <http://www.gluster.org/pipermail/gluster-users/attachments/20161222/85f97d63/attachment-0001.html><br><br>------------------------------<br><br>Message: 3<br>Date: Fri, 23 Dec 2016 10:56:34 +0800<br>From: "LiLi" <dylan-lili@foxmail.com><br>To: "gluster-users" <gluster-users@gluster.org><br>Subject: [Gluster-users] DHT DHTLINKFILE location<br>Message-ID: <tencent_2DB5F74A2E4E0B927A76FC68@qq.com><br>Content-Type: text/plain; charset="gb18030"<br><br>In glusterfs 3.8, glusterfs creates a DHT link file on the hashed subvolume when that subvolume is out of space or over its inode limit. But I can't find the DHT link file that points to the real subvolume.<br>Thanks!<br><br><br>------------------<br><br><br><br><br>------LiLi<br>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <http://www.gluster.org/pipermail/gluster-users/attachments/20161223/45fd74ca/attachment-0001.html><br><br>------------------------------<br><br>Message: 4<br>Date: 
Fri, 23 Dec 2016 11:46:56 +0530<br>From: Mohammed Rafi K C <rkavunga@redhat.com><br>To: LiLi <dylan-lili@foxmail.com>, gluster-users<br> <gluster-users@gluster.org><br>Subject: Re: [Gluster-users] DHT DHTLINKFILE location<br>Message-ID: <87940ba0-9590-46da-9d2f-d98957c5493c@redhat.com><br>Content-Type: text/plain; charset="utf-8"<br><br>If you are sure that the link file has been created, then it will be on the<br>hashed subvolume only. Just do a find for the file on the brick backend and see.<br><br><br>Regards<br><br>Rafi KC<br><br>On 12/23/2016 08:26 AM, LiLi wrote:<br>> In glusterfs 3.8, glusterfs creates a DHT link file on the hashed subvolume<br>> when that subvolume is out of space or over its inode limit. But I can't<br>> find the DHT link file that points to the real subvolume.<br>> Thanks!<br>><br>> ------------------<br>> ------<br>> LiLi<br>><br>><br>> _______________________________________________<br>> Gluster-users mailing list<br>> Gluster-users@gluster.org<br>> http://www.gluster.org/mailman/listinfo/gluster-users<br><br>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <http://www.gluster.org/pipermail/gluster-users/attachments/20161223/8e8b41bf/attachment-0001.html><br><br>------------------------------<br><br>Message: 5<br>Date: Fri, 23 Dec 2016 14:33:56 +0530<br>From: Mohammed Rafi K C <rkavunga@redhat.com><br>To: yonex <yonexyonex@icloud.com><br>Cc: gluster-users@gluster.org<br>Subject: Re: [Gluster-users] File operation failure on simple<br> distributed volume<br>Message-ID: <8fb4baca-98bd-28eb-b96c-6787be80a829@redhat.com><br>Content-Type: text/plain; charset="utf-8"<br><br>Hi Yonex,<br><br>As we discussed on IRC in #gluster-devel, I have attached the gdb script<br>to this mail.<br><br>Procedure to run the gdb script:<br><br>1) Install gdb.<br><br>2) Download and install the gluster debuginfo for your machine.
Packages<br>location: https://cbs.centos.org/koji/buildinfo?buildID=12757<br><br>3) Find the process ID and attach gdb to the process using the command<br>gdb attach <pid> -x <path_to_script><br><br>4) Continue running the script until you hit the problem.<br><br>5) Stop gdb.<br><br>6) You will see a file called mylog.txt in the location where you ran<br>gdb.<br><br><br>Please keep an eye on the attached process. If you have any doubt, please<br>feel free to get back to me.<br><br>Regards<br><br>Rafi KC<br><br><br>On 12/19/2016 05:33 PM, Mohammed Rafi K C wrote:<br>><br>> On 12/19/2016 05:32 PM, Mohammed Rafi K C wrote:<br>>> Client 0-glusterfs01-client-2 has disconnected from bricks around<br>>> 2016-12-15 11:21:17.854249. Can you look at and/or paste the brick logs<br>>> from around that time?<br>> You can find the brick name and hostname for 0-glusterfs01-client-2 from<br>> the client graph.<br>><br>> Rafi<br>><br>>> Are you in any of the gluster IRC channels? If so, what nickname<br>>> can I search for?<br>>><br>>> Regards<br>>> Rafi KC<br>>><br>>> On 12/19/2016 04:28 PM, yonex wrote:<br>>>> Rafi,<br>>>><br>>>> OK. Thanks for your guide. I found the debug log and pasted the lines around it.<br>>>> http://pastebin.com/vhHR6PQN<br>>>><br>>>> Regards<br>>>><br>>>><br>>>> 2016-12-19 14:58 GMT+09:00 Mohammed Rafi K C <rkavunga@redhat.com>:<br>>>>> On 12/16/2016 09:10 PM, yonex wrote:<br>>>>>> Rafi,<br>>>>>><br>>>>>> Thanks, the .meta feature, which I didn't know about, is very nice. 
I finally have<br>>>>>> captured debug logs from a client and the bricks.<br>>>>>><br>>>>>> A mount log:<br>>>>>> - http://pastebin.com/Tjy7wGGj<br>>>>>><br>>>>>> FYI rickdom126 is my client's hostname.<br>>>>>><br>>>>>> Brick logs around that time:<br>>>>>> - Brick1: http://pastebin.com/qzbVRSF3<br>>>>>> - Brick2: http://pastebin.com/j3yMNhP3<br>>>>>> - Brick3: http://pastebin.com/m81mVj6L<br>>>>>> - Brick4: http://pastebin.com/JDAbChf6<br>>>>>> - Brick5: http://pastebin.com/7saP6rsm<br>>>>>><br>>>>>> However, I could not find any message like "EOF on socket". I hope<br>>>>>> there is some helpful information in the logs above.<br>>>>> Indeed. I understand that the connections are in a disconnected state. But<br>>>>> what I'm particularly looking for is the cause of the disconnect. Can<br>>>>> you paste the debug logs from when it starts disconnecting, and around that? You<br>>>>> may see a debug log that says "disconnecting now".<br>>>>><br>>>>><br>>>>> Regards<br>>>>> Rafi KC<br>>>>><br>>>>><br>>>>>> Regards.<br>>>>>><br>>>>>><br>>>>>> 2016-12-14 15:20 GMT+09:00 Mohammed Rafi K C <rkavunga@redhat.com>:<br>>>>>>> On 12/13/2016 09:56 PM, yonex wrote:<br>>>>>>>> Hi Rafi,<br>>>>>>>><br>>>>>>>> Thanks for your response. OK, I think it is possible to capture debug<br>>>>>>>> logs, since the error seems to be reproduced a few times per day. I<br>>>>>>>> will try that. However, since I want to avoid redundant debug output if<br>>>>>>>> possible, is there a way to enable debug logging only on specific client<br>>>>>>>> nodes?<br>>>>>>> If you are using a fuse mount, there is a proc-like feature called .meta.<br>>>>>>> You can set the log level through that for a particular client [1]. 
But I<br>>>>>>> also want logs from the bricks, because I suspect the brick processes of<br>>>>>>> initiating the disconnects.<br>>>>>>><br>>>>>>><br>>>>>>> [1] e.g.: echo 8 > /mnt/glusterfs/.meta/logging/loglevel<br>>>>>>><br>>>>>>>> Regards<br>>>>>>>><br>>>>>>>> Yonex<br>>>>>>>><br>>>>>>>> 2016-12-13 23:33 GMT+09:00 Mohammed Rafi K C <rkavunga@redhat.com>:<br>>>>>>>>> Hi Yonex,<br>>>>>>>>><br>>>>>>>>> Is this consistently reproducible? If so, can you enable debug logs [1]<br>>>>>>>>> and check for any message similar to [2]? Basically, you can even search<br>>>>>>>>> for "EOF on socket".<br>>>>>>>>><br>>>>>>>>> You can set your log level back to default (INFO) after capturing for<br>>>>>>>>> some time.<br>>>>>>>>><br>>>>>>>>><br>>>>>>>>> [1] : gluster volume set <volname> diagnostics.brick-log-level DEBUG and<br>>>>>>>>> gluster volume set <volname> diagnostics.client-log-level DEBUG<br>>>>>>>>><br>>>>>>>>> [2] : http://pastebin.com/xn8QHXWa<br>>>>>>>>><br>>>>>>>>><br>>>>>>>>> Regards<br>>>>>>>>><br>>>>>>>>> Rafi KC<br>>>>>>>>><br>>>>>>>>> On 12/12/2016 09:35 PM, yonex wrote:<br>>>>>>>>>> Hi,<br>>>>>>>>>><br>>>>>>>>>> When my application moves a file from its local disk to a FUSE-mounted<br>>>>>>>>>> GlusterFS volume, the client outputs many warnings and errors, not<br>>>>>>>>>> always but occasionally. The volume is a simple distributed volume.<br>>>>>>>>>><br>>>>>>>>>> A sample of logs pasted: http://pastebin.com/axkTCRJX<br>>>>>>>>>><br>>>>>>>>>> At a glance it seems to come from something like a network disconnection<br>>>>>>>>>> ("Transport endpoint is not connected"), but other<br>>>>>>>>>> networking applications on the same machine don't observe such a<br>>>>>>>>>> thing. 
So I guess there may be a problem somewhere in the GlusterFS stack.<br>>>>>>>>>><br>>>>>>>>>> It ended up failing to rename a file, logging PHP warnings like those below:<br>>>>>>>>>><br>>>>>>>>>> PHP Warning: rename(/glusterfs01/db1/stack/f0/13a9a2f0): failed<br>>>>>>>>>> to open stream: Input/output error in [snipped].php on line 278<br>>>>>>>>>> PHP Warning:<br>>>>>>>>>> rename(/var/stack/13a9a2f0,/glusterfs01/db1/stack/f0/13a9a2f0):<br>>>>>>>>>> Input/output error in [snipped].php on line 278<br>>>>>>>>>><br>>>>>>>>>> Conditions:<br>>>>>>>>>><br>>>>>>>>>> - GlusterFS 3.8.5 installed via yum from CentOS-Gluster-3.8.repo<br>>>>>>>>>> - Volume info and status pasted: http://pastebin.com/JPt2KeD8<br>>>>>>>>>> - Client machines' OS: Scientific Linux 6 or CentOS 6.<br>>>>>>>>>> - Server machines' OS: CentOS 6.<br>>>>>>>>>> - Kernel version is 2.6.32-642.6.2.el6.x86_64 on all machines.<br>>>>>>>>>> - The number of connected FUSE clients is 260.<br>>>>>>>>>> - No firewall between connected machines.<br>>>>>>>>>> - Neither remounting volumes nor rebooting client machines takes effect.<br>>>>>>>>>> - It is caused not only by rename() but also by copy() and filesize() operations.<br>>>>>>>>>> - No output in the brick logs when it happens.<br>>>>>>>>>><br>>>>>>>>>> Any ideas? 
I'd appreciate any help.<br>>>>>>>>>><br>>>>>>>>>> Regards.<br>>>>>>>>>> _______________________________________________<br>>>>>>>>>> Gluster-users mailing list<br>>>>>>>>>> Gluster-users@gluster.org<br>>>>>>>>>> http://www.gluster.org/mailman/listinfo/gluster-users<br><br>-------------- next part --------------<br>set pagination off<br>set logging file mylog.txt<br>set logging on<br>handle SIGPIPE nostop<br>b socket.c:596<br>commands 1<br> shell date -u<br> p priv->incoming.ra_read<br> p priv->incoming.ra_max<br> p priv->incoming.ra_served<br> p priv->incoming.record_state<br> p priv->sock<br> bt<br> continue<br>end<br>b socket.c:2108<br>commands 2<br> shell date -u<br> p in->total_bytes_read<br> p in->msg_type<br> bt<br> continue<br>end<br>b socket.c:2142 if ret < 0<br>commands 3<br> shell date -u<br> p frag->bytes_read<br> p ret<br> bt<br> continue<br>end<br>b socket.c:1011 if size >= 1073741824ULL<br>commands 4<br> shell date -u<br> p size<br> p iov_length (msg->rpchdr, msg->rpchdrcount)<br> p iov_length (msg->proghdr, msg->proghdrcount)<br> p iov_length (msg->progpayload, msg->progpayloadcount)<br> bt<br> continue<br>end<br>continue<br><br>------------------------------<br><br>_______________________________________________<br>Gluster-users mailing list<br>Gluster-users@gluster.org<br>http://www.gluster.org/mailman/listinfo/gluster-users<br><br>End of Gluster-users Digest, Vol 104, Issue 22<br>**********************************************<br></div>
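For reference, the link file Rafi refers to in Message 4 can be recognized directly on a brick's backend: it is a zero-byte regular file whose permission bits are just the sticky bit (shown as ---------T by ls -l), and on the hashed subvolume it carries a trusted.glusterfs.dht.linkto extended attribute naming the subvolume that actually holds the data. A minimal sketch for scanning a brick path (the function name and example path are illustrative, not from the thread; reading trusted.* xattrs requires root on the brick server):

```python
import os
import stat

def find_dht_linkfiles(brick_path):
    """Scan a brick's backend for DHT link files.

    A DHT link file is a zero-byte regular file with the sticky bit set
    (mode 1000, shown as ---------T by `ls -l`); the hashed subvolume
    stores the real location in the trusted.glusterfs.dht.linkto xattr.
    """
    hits = []
    for root, _dirs, files in os.walk(brick_path):
        for name in files:
            path = os.path.join(root, name)
            st = os.lstat(path)
            # Zero-byte regular file with the sticky bit => DHT link file.
            if stat.S_ISREG(st.st_mode) and st.st_size == 0 and (st.st_mode & stat.S_ISVTX):
                try:
                    # trusted.* xattrs are readable only with CAP_SYS_ADMIN.
                    target = os.getxattr(path, "trusted.glusterfs.dht.linkto")
                except OSError:
                    target = b"(xattr unreadable; run as root on the brick)"
                hits.append((path, target))
    return hits

# Example usage (hypothetical brick path):
# for path, target in find_dht_linkfiles("/data/brick1/storage"):
#     print(path, target)
```

The equivalent shell check on the brick would be a find for zero-byte sticky files followed by getfattr on the candidates.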