<div dir="ltr"><div>I am talking about the time taken by the GlusterD to mark the process offline because <br></div><div>here GlusterD is responsible to making brick online/offline.<br></div><div><br></div>is it configurable?<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, May 4, 2016 at 5:53 PM, Atin Mukherjee <span dir="ltr">&lt;<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Abhishek,<br>

See the response inline.

On 05/04/2016 05:43 PM, ABHISHEK PALIWAL wrote:
> Hi Atin,
>
> Please reply: is there any configurable timeout parameter for the brick
> process to go offline which we can increase?
>
> Regards,
> Abhishek
>
> On Thu, Apr 21, 2016 at 12:34 PM, ABHISHEK PALIWAL
> <abhishpaliwal@gmail.com> wrote:
>
>     Hi Atin,
>
>     Please answer the following doubts as well:
>
>     1. If there is a temporary glitch in the network, will that affect
>     the gluster brick process in any way? Is there any timeout for the
>     brick process to go offline in case of a glitch in the network?
</span>      If there is disconnection, GlusterD will receive it and mark the<br>
brick as disconnected even if the brick process is online. So answer to<br>
this question is both yes and no. From process perspective they are<br>
still up but not to the other components/layers and that may impact the<br>
operations (both mgmt &amp; I/O given there is a disconnect between client<br>
and brick processes too)<br>
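
One way to see that divergence (a minimal sketch; the pid file path is
taken from the brick start log quoted later in this thread):

    # what glusterd believes - the "Online" column:
    gluster volume status c_glusterfs

    # whether the brick process itself is still alive:
    PIDFILE=/system/glusterd/vols/c_glusterfs/run/10.32.1.144-opt-lvmdir-c2-brick.pid
    kill -0 "$(cat $PIDFILE)" && echo "brick process is running"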
<span class="">&gt;<br>
&gt;     2. Is there is any configurable time out parameter which we can<br>
&gt;     increase ?<br>
</span>I don&#39;t get this question. What time out are you talking about?<br>
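
If you mean the client-side ping timeout (an assumption about what you
are asking; it controls how long clients wait before declaring a brick
dead, not how glusterd reacts to an RPC disconnect), that one is
configurable:

    # show the current value (the default is 42 seconds):
    gluster volume get c_glusterfs network.ping-timeout

    # raise it, e.g. to 60 seconds:
    gluster volume set c_glusterfs network.ping-timeout 60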
<span class="">&gt;<br>
&gt;     3.Brick and glusterd connected by unix domain socket.It is just a<br>
&gt;     local socket then why it is disconnect in below logs:<br>
</span>      This is not true, its over TCP socket.<br>
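
You can confirm this from either board (a sketch, assuming ss from
iproute2 is present):

    # TCP connections held by brick processes; the management connection
    # back to glusterd (port 24007) shows up here:
    ss -tnp | grep glusterfsd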
<span class="im HOEnZb">&gt;<br>
&gt;      1667 [2016-04-03 10:12:32.984331] I [MSGID: 106005]<br>
&gt;     [glusterd-handler.c:4908:__glusterd_brick_rpc_notify] 0-management:<br>
&gt;     Brick 10.32.       1.144:/opt/lvmdir/c2/brick has disconnected from<br>
&gt;     glusterd.<br>
&gt;      1668 [2016-04-03 10:12:32.984366] D [MSGID: 0]<br>
&gt;     [glusterd-utils.c:4872:glusterd_set_brick_status] 0-glusterd: Setting<br>
&gt;     brick 10.32.1.        144:/opt/lvmdir/c2/brick status to stopped<br>
&gt;<br>
&gt;     Regards,<br>
&gt;     Abhishek<br>
&gt;<br>
&gt;<br>
&gt;     On Tue, Apr 19, 2016 at 1:12 PM, ABHISHEK PALIWAL<br>
</span><span class="im HOEnZb">&gt;     &lt;<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a> &lt;mailto:<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>&gt;&gt; wrote:<br>
>
>         Hi Atin,
>
>         Thanks.
>
>         I have more doubts here.
>
>         Brick and glusterd are connected by a unix domain socket. It is
>         just a local socket, then why is it disconnecting in the below
>         logs:
>
>          1667 [2016-04-03 10:12:32.984331] I [MSGID: 106005]
>         [glusterd-handler.c:4908:__glusterd_brick_rpc_notify] 0-management:
>         Brick 10.32.1.144:/opt/lvmdir/c2/brick has disconnected from
>         glusterd.
>          1668 [2016-04-03 10:12:32.984366] D [MSGID: 0]
>         [glusterd-utils.c:4872:glusterd_set_brick_status] 0-glusterd:
>         Setting brick 10.32.1.144:/opt/lvmdir/c2/brick status to stopped
>
>
>         Regards,
>         Abhishek
>
>
>         On Fri, Apr 15, 2016 at 9:14 AM, Atin Mukherjee
>         <amukherj@redhat.com> wrote:
>
>
>
>             On 04/14/2016 04:07 PM, ABHISHEK PALIWAL wrote:
>             >
>             >
>             > On Thu, Apr 14, 2016 at 2:33 PM, Atin Mukherjee
>             > <amukherj@redhat.com> wrote:
>             >
>             >
>             >
>             >     On 04/05/2016 03:35 PM, ABHISHEK PALIWAL wrote:
>             >     >
>             >     >
>             >     > On Tue, Apr 5, 2016 at 2:22 PM, Atin Mukherjee
>             >     > <amukherj@redhat.com> wrote:
>             >     >
>             >     >
>             >     >
>             >     >     On 04/05/2016 01:04 PM, ABHISHEK PALIWAL wrote:
>             >     >     > Hi Team,
>             >     >     >
>             >     >     > We are using Gluster 3.7.6 and facing one problem in which a
>             >     >     > brick is not coming online after restarting the board.
>             >     >     >
>             >     >     > To understand our setup, please look at the following steps:
>             >     >     > 1. We have two boards, A and B, on which a Gluster volume is
>             >     >     > running in replicated mode, with one brick on each board.
>             >     >     > 2. The Gluster mount point is present on Board A and is shared
>             >     >     > between a number of processes.
>             >     >     > 3. Till now our volume is in sync and everything is working
>             >     >     > fine.
>             >     >     > 4. Now we have a test case in which we stop glusterd, reboot
>             >     >     > Board B, and when this board comes up, start glusterd again
>             >     >     > on it.
>             >     >     > 5. We repeated step 4 multiple times to check the reliability
>             >     >     > of the system.
>             >     >     > 6. After step 4, sometimes the system comes to a working state
>             >     >     > (i.e. in sync) but sometimes we find that the brick of Board B
>             >     >     > is present in the "gluster volume status" output but does not
>             >     >     > come online even after waiting for more than a minute.
>             >     >     As I mentioned in another email thread, until and unless the
>             >     >     log shows evidence that there was a reboot, nothing can be
>             >     >     concluded. The last log you shared with us a few days back
>             >     >     didn't give any indication that the brick process wasn't
>             >     >     running.
>             >     >
>             >     > How can we identify that the brick process is running from the
>             >     > brick logs?
>             >     >
>             >     >     > 7. When step 4 is executing, at the same time on Board A
>             >     >     > some processes start accessing files from the Gluster mount
>             >     >     > point.
>             >     >     >
>             >     >     > As a solution to make this brick online, we found some
>             >     >     > existing issues in the gluster mailing list suggesting the
>             >     >     > use of "gluster volume start <vol_name> force" to take the
>             >     >     > brick from 'offline' to 'online'.
>             >     >     >
>             >     >     > If we use the "gluster volume start <vol_name> force"
>             >     >     > command, it will kill the existing volume process and start a
>             >     >     > new process; then what will happen if other processes are
>             >     >     > accessing the same volume at the time the volume process is
>             >     >     > killed internally by this command? Will it cause any failure
>             >     >     > in these processes?
>             >     >     This is not true; volume start force will start the brick
>             >     >     processes only if they are not running. Running brick
>             >     >     processes will not be interrupted.
>             >     >
>             >     > We have tried this and checked the pid of the process before
>             >     > force start and after force start.
>             >     > The pid had changed after force start.
>             >     >
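>             >     > (A minimal sketch of that before/after check, using the pid file
>             >     > path from the brick start log below; the sleep is arbitrary:)
>             >     >
>             >     >     PIDFILE=/system/glusterd/vols/c_glusterfs/run/10.32.1.144-opt-lvmdir-c2-brick.pid
>             >     >     before=$(cat $PIDFILE)   # pid before force start
>             >     >     gluster volume start c_glusterfs force
>             >     >     sleep 2                  # allow glusterd to (re)spawn the brick
>             >     >     after=$(cat $PIDFILE)    # pid after force start
>             >     >     [ "$before" != "$after" ] && echo "pid changed: brick was restarted"
>             >     >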
>             >     > Please find the logs at the time of failure attached once again
>             >     > with log-level=debug.
>             >     >
>             >     > If you can give me the exact line where you are able to find out
>             >     > that the brick process is running in the brick log file, please
>             >     > give me the line number in that file.
>             >
>             >     Here is the sequence in which glusterd and the respective brick
>             >     process are restarted.
>             >
>             >     1. glusterd restart trigger - line number 1014 in the glusterd.log
>             >     file:
>             >
>             >     [2016-04-03 10:12:29.051735] I [MSGID: 100030] [glusterfsd.c:2318:main]
>             >     0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd
>             >     version 3.7.6 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid
>             >     --log-level DEBUG)
>             >
>             >     2. brick start trigger - line number 190 in opt-lvmdir-c2-brick.log:
>             >
>             >     [2016-04-03 10:14:25.268833] I [MSGID: 100030] [glusterfsd.c:2318:main]
>             >     0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd
>             >     version 3.7.6 (args: /usr/sbin/glusterfsd -s 10.32.1.144 --volfile-id
>             >     c_glusterfs.10.32.1.144.opt-lvmdir-c2-brick -p
>             >     /system/glusterd/vols/c_glusterfs/run/10.32.1.144-opt-lvmdir-c2-brick.pid
>             >     -S /var/run/gluster/697c0e4a16ebc734cd06fd9150723005.socket
>             >     --brick-name /opt/lvmdir/c2/brick -l
>             >     /var/log/glusterfs/bricks/opt-lvmdir-c2-brick.log --xlator-option
>             >     *-posix.glusterd-uuid=2d576ff8-0cea-4f75-9e34-a5674fbf7256
>             >     --brick-port 49329 --xlator-option c_glusterfs-server.listen-port=49329)
>             >
>             >     3. The following log indicates that the brick is up and is now
>             >     started. Refer to line 16123 in glusterd.log:
>             >
>             >     [2016-04-03 10:14:25.336855] D [MSGID: 0]
>             >     [glusterd-handler.c:4897:__glusterd_brick_rpc_notify] 0-management:
>             >     Connected to 10.32.1.144:/opt/lvmdir/c2/brick
>             >
>             >     This clearly indicates that the brick is up and running, as after
>             >     that I do not see any disconnect event processed by glusterd for
>             >     the brick process.
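>             >
>             >     (To pull this sequence out of the logs quickly, a rough grep,
>             >     assuming the log files referenced above live under
>             >     /var/log/glusterfs:)
>             >
>             >         grep -n -e "Started running" -e "Connected to" \
>             >             -e "has disconnected from" \
>             >             /var/log/glusterfs/glusterd.log \
>             >             /var/log/glusterfs/bricks/opt-lvmdir-c2-brick.log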
>             >
>             > Thanks for replying descriptively, but please also clear up some
>             > more doubts:
>             >
>             > 1. At this 10:14:25 moment in time the brick is available because we
>             > have removed the brick and added it again to make it online.
>             > The following are the logs from the cmd-history.log file of 000300:
>             >
>             > [2016-04-03 10:14:21.446570]  : volume status : SUCCESS
>             > [2016-04-03 10:14:21.665889]  : volume remove-brick c_glusterfs replica
>             > 1 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
>             > [2016-04-03 10:14:21.764270]  : peer detach 10.32.1.144 : SUCCESS
>             > [2016-04-03 10:14:23.060442]  : peer probe 10.32.1.144 : SUCCESS
>             > [2016-04-03 10:14:25.649525]  : volume add-brick c_glusterfs replica 2
>             > 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
>             >
>             > Also, 10:12:29 was the last reboot time before this failure, so I
>             > totally agree with what you said earlier.
>             >
>             > 2. As you said, at 10:12:29 glusterd restarted; then why are we not
>             > getting 'brick start trigger' related logs like the below between the
>             > 10:12:29 and 10:14:25 timestamps, which is an interval of about two
>             > minutes?
>             So here is the culprit:
>
>             1667 [2016-04-03 10:12:32.984331] I [MSGID: 106005]
>             [glusterd-handler.c:4908:__glusterd_brick_rpc_notify] 0-management:
>             Brick 10.32.1.144:/opt/lvmdir/c2/brick has disconnected from
>             glusterd.
>             1668 [2016-04-03 10:12:32.984366] D [MSGID: 0]
>             [glusterd-utils.c:4872:glusterd_set_brick_status] 0-glusterd: Setting
>             brick 10.32.1.144:/opt/lvmdir/c2/brick status to stopped
>
>
>             GlusterD received a disconnect event for this brick process and
>             marked it as stopped. This could happen for two reasons: 1. the brick
>             process went down, or 2. a network issue. In this case I believe it
>             is the latter, since the brick process was running at that time. I'd
>             request you to check this from the N/W side.
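>
>             (As a starting point for that N/W-side check, a sketch assuming
>             tcpdump is available on the boards; 24007 is glusterd's management
>             port and 49329 the brick port from these logs:)
>
>                 # watch for resets/drops around the failure window
>                 tcpdump -i any -n 'tcp port 24007 or tcp port 49329'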
>
>
>             >
>             > [2016-04-03 10:14:25.268833] I [MSGID: 100030] [glusterfsd.c:2318:main]
>             > 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd
>             > version 3.7.6 (args: /usr/sbin/glusterfsd -s 10.32.1.144 --volfile-id
>             > c_glusterfs.10.32.1.144.opt-lvmdir-c2-brick -p
>             > /system/glusterd/vols/c_glusterfs/run/10.32.1.144-opt-lvmdir-c2-brick.pid
>             > -S /var/run/gluster/697c0e4a16ebc734cd06fd9150723005.socket
>             > --brick-name /opt/lvmdir/c2/brick -l
>             > /var/log/glusterfs/bricks/opt-lvmdir-c2-brick.log --xlator-option
>             > *-posix.glusterd-uuid=2d576ff8-0cea-4f75-9e34-a5674fbf7256
>             > --brick-port 49329 --xlator-option c_glusterfs-server.listen-port=49329)
>             >
>             > 3. We are continuously checking the brick status in the above time
>             > duration using "gluster volume status"; refer to the cmd-history.log
>             > file from 000300.
>             >
>             > In the glusterd.log file we are also getting the below logs
>             >
>             > [2016-04-03 10:12:31.771051] D [MSGID: 0]
>             > [glusterd-handler.c:4897:__glusterd_brick_rpc_notify] 0-management:
>             > Connected to 10.32.1.144:/opt/lvmdir/c2/brick
>             >
>             > [2016-04-03 10:12:32.981152] D [MSGID: 0]
>             > [glusterd-handler.c:4897:__glusterd_brick_rpc_notify] 0-management:
>             > Connected to 10.32.1.144:/opt/lvmdir/c2/brick
>             >
>             > two times between 10:12:29 and 10:14:25, and as you said these logs
>             > "clearly indicate that the brick is up and running"; then why is the
>             > brick not online in the "gluster volume status" command?
>             >
>             > [2016-04-03 10:12:33.990487]  : volume status : SUCCESS
>             > [2016-04-03 10:12:34.007469]  : volume status : SUCCESS
>             > [2016-04-03 10:12:35.095918]  : volume status : SUCCESS
>             > [2016-04-03 10:12:35.126369]  : volume status : SUCCESS
>             > [2016-04-03 10:12:36.224018]  : volume status : SUCCESS
>             > [2016-04-03 10:12:36.251032]  : volume status : SUCCESS
>             > [2016-04-03 10:12:37.352377]  : volume status : SUCCESS
>             > [2016-04-03 10:12:37.374028]  : volume status : SUCCESS
>             > [2016-04-03 10:12:38.446148]  : volume status : SUCCESS
>             > [2016-04-03 10:12:38.468860]  : volume status : SUCCESS
>             > [2016-04-03 10:12:39.534017]  : volume status : SUCCESS
>             > [2016-04-03 10:12:39.553711]  : volume status : SUCCESS
>             > [2016-04-03 10:12:40.616610]  : volume status : SUCCESS
>             > [2016-04-03 10:12:40.636354]  : volume status : SUCCESS
>             > ......
>             > ......
>             > ......
>             > [2016-04-03 10:14:21.446570]  : volume status : SUCCESS
>             > [2016-04-03 10:14:21.665889]  : volume remove-brick c_glusterfs replica
>             > 1 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
>             > [2016-04-03 10:14:21.764270]  : peer detach 10.32.1.144 : SUCCESS
>             > [2016-04-03 10:14:23.060442]  : peer probe 10.32.1.144 : SUCCESS
>             > [2016-04-03 10:14:25.649525]  : volume add-brick c_glusterfs replica 2
>             > 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
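>             >
>             > (For clarity, the wait loop behind these status checks looks roughly
>             > like the sketch below; parsing the "Online" column with grep is an
>             > assumption about the status output format:)
>             >
>             >     end=$((SECONDS + 120))    # poll for ~2 minutes
>             >     while [ $SECONDS -lt $end ]; do
>             >         # the brick line ends "... Port Online Pid"; look for "Y"
>             >         gluster volume status c_glusterfs \
>             >             | grep "/opt/lvmdir/c2/brick" | grep -q " Y " && break
>             >         sleep 1
>             >     done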
>             >
>             > In the above logs we are continuously checking the brick status,
>             > but when we don't find the brick status 'online' even after ~2
>             > minutes, we remove the brick and add it again to make it online:
>             >
>             > [2016-04-03 10:14:21.665889]  : volume remove-brick c_glusterfs replica
>             > 1 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
>             > [2016-04-03 10:14:21.764270]  : peer detach 10.32.1.144 : SUCCESS
>             > [2016-04-03 10:14:23.060442]  : peer probe 10.32.1.144 : SUCCESS
>             > [2016-04-03 10:14:25.649525]  : volume add-brick c_glusterfs replica 2
>             > 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
>             >
>             > That is why in the logs we are getting the "brick start trigger"
>             > logs at timestamp 10:14:25:
>             >
>             > [2016-04-03 10:14:25.268833] I [MSGID: 100030] [glusterfsd.c:2318:main]
>             > 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd
>             > version 3.7.6 (args: /usr/sbin/glusterfsd -s 10.32.1.144 --volfile-id
>             > c_glusterfs.10.32.1.144.opt-lvmdir-c2-brick -p
>             > /system/glusterd/vols/c_glusterfs/run/10.32.1.144-opt-lvmdir-c2-brick.pid
>             > -S /var/run/gluster/697c0e4a16ebc734cd06fd9150723005.socket
>             > --brick-name /opt/lvmdir/c2/brick -l
>             > /var/log/glusterfs/bricks/opt-lvmdir-c2-brick.log --xlator-option
>             > *-posix.glusterd-uuid=2d576ff8-0cea-4f75-9e34-a5674fbf7256
>             > --brick-port 49329 --xlator-option c_glusterfs-server.listen-port=49329)
>             >
>             >
>             > Regards,
>             > Abhishek
>             >
>             >
>             >     Please note that all the logs referred to and pasted are
>             >     from 002500.
>             >
>             >     ~Atin
>             >     >
>             >     > 002500 - Board B, whose brick is offline
>             >     > 000300 - Board A logs
>             >     >
>             >     >     >
>             >     >     > *Question: What could be contributing to the brick going
>             >     >     > offline?*
>             >     >     >
>             >     >     >
>             >     >     > --
>             >     >     >
>             >     >     > Regards
>             >     >     > Abhishek Paliwal
>             >     >     >
>             >     >     >
>             >     >     > _______________________________________________
>             >     >     > Gluster-devel mailing list
>             >     >     > Gluster-devel@gluster.org
>             >     >     > http://www.gluster.org/mailman/listinfo/gluster-devel
>             >     >     >
>             >     >
>             >
>             > --
>             >
>
>
> --
>
> Regards
> Abhishek Paliwal

--

Regards
Abhishek Paliwal