<div dir="ltr">Why don't you share the glusterd log file?<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jul 5, 2016 at 12:53 PM, Atul Yadav <span dir="ltr"><<a href="mailto:atulyadavtech@gmail.com" target="_blank">atulyadavtech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi,<div><br></div><div>After restarting the service, it entered a failed state:</div><div><div>[root@master1 ~]# /etc/init.d/glusterd restart</div><div>Stopping glusterd: [FAILED]</div><div>Starting glusterd: [FAILED]</div></div><div><br></div><div>Note: this behavior happens only over the rdma network; over ethernet there is no issue.</div><div><br></div><div>Thank you</div><span class="HOEnZb"><font color="#888888"><div>Atul Yadav</div><div><br></div><div><br></div></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jul 5, 2016 at 11:28 AM, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><div><div>On Tue, Jul 5, 2016 at 11:01 AM, Atul Yadav <span dir="ltr"><<a href="mailto:atulyadavtech@gmail.com" target="_blank">atulyadavtech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi All,<div><br></div><div>The glusterfs environment details are given below:</div><div><br></div><div><div>[root@master1 ~]# cat /etc/redhat-release</div><div>CentOS release 6.7 (Final)</div><div>[root@master1 ~]# uname -r</div><div>2.6.32-642.1.1.el6.x86_64</div><div>[root@master1 ~]# rpm -qa | grep -i 
gluster</div><div>glusterfs-rdma-3.8rc2-1.el6.x86_64</div><div>glusterfs-api-3.8rc2-1.el6.x86_64</div><div>glusterfs-3.8rc2-1.el6.x86_64</div><div>glusterfs-cli-3.8rc2-1.el6.x86_64</div><div>glusterfs-client-xlators-3.8rc2-1.el6.x86_64</div><div>glusterfs-server-3.8rc2-1.el6.x86_64</div><div>glusterfs-fuse-3.8rc2-1.el6.x86_64</div><div>glusterfs-libs-3.8rc2-1.el6.x86_64</div><div>[root@master1 ~]#</div></div><div><br></div><div><div>Volume Name: home</div><div>Type: Replicate</div><div>Volume ID: 2403ddf9-c2e0-4930-bc94-734772ef099f</div><div>Status: Stopped</div><div>Number of Bricks: 1 x 2 = 2</div><div>Transport-type: rdma</div><div>Bricks:</div><div>Brick1: master1-ib.dbt.au:/glusterfs/home/brick1</div><div>Brick2: master2-ib.dbt.au:/glusterfs/home/brick2</div><div>Options Reconfigured:</div><div>network.ping-timeout: 20</div><div>nfs.disable: on</div><div>performance.readdir-ahead: on</div><div>transport.address-family: inet</div><div>config.transport: rdma</div><div>cluster.server-quorum-type: server</div><div>cluster.quorum-type: fixed</div><div>cluster.quorum-count: 1</div><div>locks.mandatory-locking: off</div><div>cluster.enable-shared-storage: disable</div><div>cluster.server-quorum-ratio: 51%</div></div><div><br></div><div>When only my single master node is up, the other nodes are still showing as Connected:</div><div><div>gluster pool list</div><div>UUID Hostname State</div><div>89ccd72e-cb99-4b52-a2c0-388c99e5c7b3 <a href="http://master2-ib.dbt.au" target="_blank">master2-ib.dbt.au</a> Connected</div><div>d2c47fc2-f673-4790-b368-d214a58c59f4 <a href="http://compute01-ib.dbt.au" target="_blank">compute01-ib.dbt.au</a> Connected</div><div>a5608d66-a3c6-450e-a239-108668083ff2 localhost Connected</div><div>[root@master1 ~]#</div></div><div><br></div><div><br></div><div>Please advise:</div><div>Is this normal behavior, or is this an issue?</div></div></blockquote><div><br></div></div></div><div>First off, there is no master/slave configuration 
mode for a gluster trusted storage pool (i.e. the peer list). Secondly, if master2 and compute01 still show as 'Connected' even though they are down, it means that localhost didn't receive the disconnect events for some reason. Could you restart the glusterd service on this node and check the output of gluster pool list again?<br><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div><div>Thank You</div><span><font color="#888888"><div>Atul Yadav</div><div><br></div></font></span></div>
<br>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br></blockquote></div><br></div></div>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div>
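<p>The diagnostics suggested in the thread (share the glusterd log, restart the service, re-check peer states) can be sketched as shell commands. This is a minimal sketch, not a definitive procedure: the log path below is the usual default on CentOS 6 with GlusterFS 3.x and may differ on your install.</p>
<pre>
#!/bin/sh
# Sketch of the diagnostics suggested in the thread.
# Assumption: default log location for glusterd on this platform;
# newer releases may use /var/log/glusterfs/glusterd.log instead.
LOG=/var/log/glusterfs/etc-glusterfs-glusterd.vol.log

# 1. Inspect (or attach to a reply) the tail of glusterd's own log --
#    a failed start over rdma normally leaves an error here:
[ -r "$LOG" ] && tail -n 100 "$LOG"

# 2. If the log is inconclusive, run glusterd in the foreground with
#    debug-level logging to see exactly where startup fails:
# glusterd --debug

# 3. Once glusterd restarts cleanly, re-check the peer states:
# gluster pool list
</pre>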