This seems to be a workaround. Isn't there a proper way to achieve this through the volume configuration alone? I would rather not have to set up a third, fake server just to avoid the problem. (A sketch of one configuration-only approach appears after the original post at the bottom of this thread.)

On Monday, February 9, 2015 2:27 AM, Kaamesh Kamalaaharan <kaamesh@novocraft.com> wrote:

It works! Thanks to Craig's suggestion, I set up a third server without a brick and added it to the trusted pool. Now the volume no longer goes down. Thanks a lot, guys!

Thank You Kindly,
Kaamesh
Bioinformatician
Novocraft Technologies Sdn Bhd
C-23A-05, 3 Two Square, Section 19, 46300 Petaling Jaya
Selangor Darul Ehsan
Malaysia
Mobile: +60176562635
Ph: +60379600541
Fax: +60379600540
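For anyone reproducing this fix, a minimal sketch of adding a brick-less peer to the trusted pool; the hostname gfs3 is an assumption (the volume info below only names gfs1 and gfs2):

    # Run from an existing pool member (e.g. gfs1). "gfs3" is a
    # hypothetical hostname for the new quorum-only server: it joins the
    # trusted pool without hosting any bricks, so its only job is to
    # contribute a vote toward server quorum.
    gluster peer probe gfs3

    # Confirm all peers report "State: Peer in Cluster (Connected)".
    gluster peer status

With three servers in the pool, losing one still leaves 2 of 3 (about 67%) online, which clears the 50% server-quorum-ratio shown in the volume info below.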
<br clear="none"><div class="yiv9466236904gmail_quote">On Mon, Feb 9, 2015 at 2:19 AM,  <span dir="ltr">&lt;<a rel="nofollow" shape="rect" ymailto="mailto:prmarino1@gmail.com" target="_blank" href="mailto:prmarino1@gmail.com">prmarino1@gmail.com</a>&gt;</span> wrote:<br clear="none"><blockquote class="yiv9466236904gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div class="yiv9466236904yqt9137024278" id="yiv9466236904yqt49489"><div style="background-color:rgb(255,255,255);"><div style="width:100%;font-family:Calibri, 'Slate Pro', sans-serif;color:rgb(31,73,125);background-color:rgb(255,255,255);">Quorum only appli‎es when you have 3 or more bricks replicating each other. In other words it doesn't mean any thing in a 2 node 2 brick cluster so it shouldn't be set.</div><div style="width:100%;font-family:Calibri, 'Slate Pro', sans-serif;color:rgb(31,73,125);background-color:rgb(255,255,255);"><br clear="none"></div><div style="width:100%;font-family:Calibri, 'Slate Pro', sans-serif;color:rgb(31,73,125);background-color:rgb(255,255,255);">In other words based on your settings it's acting correctly because it thinks that the online brick needs to have a minimum of one other brick it agrees with online.</div>                                                                                                                                     <div style="width:100%;font-family:Calibri, 'Slate Pro', sans-serif;color:rgb(31,73,125);background-color:rgb(255,255,255);"><br style="" clear="none"></div>                                                                                                                                     <div style="font-family:Calibri, 'Slate Pro', sans-serif;color:rgb(31,73,125);background-color:rgb(255,255,255);">Sent from my BlackBerry 10 smartphone.</div>                                                                                                                                                                                        <table style="background-color:white;border-spacing:0px;" width="100%"><tbody><tr><td colspan="2" rowspan="1" style="background-color:rgb(255,255,255);">                                              <div style="border-style:solid none none;border-top-color:rgb(181,196,223);border-top-width:1pt;padding:3pt 0in 0in;font-family:Tahoma, 'BB Alpha Sans', 'Slate Pro';font-size:10pt;">  <div><b>From: </b>Kaamesh Kamalaaharan</div><div><b>Sent: </b>Sunday, February 8, 2015 05:50</div><div><b>To: </b><a rel="nofollow" shape="rect" ymailto="mailto:gluster-users@gluster.org" target="_blank" href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a></div><div><b>Subject: </b>[Gluster-users] 2 Node glusterfs quorum help</div></div></td></tr></tbody></table><div><div class="yiv9466236904h5"><div style="border-style:solid none none;border-top-color:rgb(186,188,209);border-top-width:1pt;background-color:rgb(255,255,255);"></div><br clear="none"><div><div dir="ltr">Hi guys. I have a 2 node replicated&nbsp;gluster &nbsp;setup with the quorum count set at 1 brick. By my understanding this means that the gluster will not &nbsp;go down when one brick is disconnected. 
However, this proves false: when one brick is disconnected (I just pulled it off the network), the remaining brick goes down as well, and I lose my mount points on the server. Can anyone shed some light on what's wrong?

My volume options are as follows:

Volume Name: gfsvolume
Type: Replicate
Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/export/sda/brick
Brick2: gfs2:/export/sda/brick
Options Reconfigured:
cluster.quorum-count: 1
auth.allow: 172.*
cluster.quorum-type: fixed
performance.cache-size: 1914589184
performance.cache-refresh-timeout: 60
cluster.data-self-heal-algorithm: diff
performance.write-behind-window-size: 4MB
nfs.trusted-write: off
nfs.addr-namelookup: off
cluster.server-quorum-type: server
performance.cache-max-file-size: 2MB
network.frame-timeout: 90
network.ping-timeout: 30
performance.quick-read: off
cluster.server-quorum-ratio: 50%

Thank You Kindly,
Kaamesh
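On the question at the top of the thread: following prmarino1's advice that quorum should not be set on a 2-brick cluster, the configuration-only alternative is to unset the quorum options rather than add a third server. A minimal sketch using the gluster CLI's volume reset subcommand; be aware that running two replicas with no quorum leaves the volume open to split-brain if the nodes are partitioned from each other:

    # Unset server-side quorum so glusterd no longer takes the surviving
    # brick offline when its peer disappears. WARNING: with quorum
    # disabled, a partition between gfs1 and gfs2 can produce split-brain.
    gluster volume reset gfsvolume cluster.server-quorum-type
    gluster volume reset gfsvolume cluster.quorum-type
    gluster volume reset gfsvolume cluster.quorum-count

    # cluster.server-quorum-ratio is a cluster-wide option; depending on
    # the release it may need to be reset against the special volume "all".
    gluster volume reset all cluster.server-quorum-ratio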
<br clear="none"></div></div></div></div></div><br clear="none">_______________________________________________<br clear="none">
Gluster-users mailing list<br clear="none">
<a rel="nofollow" shape="rect" ymailto="mailto:Gluster-users@gluster.org" target="_blank" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br clear="none">
<a rel="nofollow" shape="rect" target="_blank" href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a><br clear="none"></blockquote></div><br clear="none"></div></div></div><br><div class="yqt9137024278" id="yqt60105">_______________________________________________<br clear="none">Gluster-users mailing list<br clear="none"><a shape="rect" ymailto="mailto:Gluster-users@gluster.org" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br clear="none"><a shape="rect" href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a></div><br><br></div>  </div> </div>  </div> </div></body></html>