<div dir="ltr">It works! Thanks to craig&#39;s suggestion . i setup a third server without a brick and added it to the trusted pool. now it doesnt go down. thanks alot guys!</div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><div>Thank You Kindly,</div><div>Kaamesh</div><div>Bioinformatician</div><div>Novocraft Technologies Sdn Bhd</div><div><div>C-23A-05, 3 Two Square, Section 19, 46300 Petaling Jaya</div><div>Selangor Darul Ehsan</div><div>Malaysia</div></div><div>Mobile: <a value="+60176562635" style="color:rgb(17,85,204)">+60176562635</a></div><div>Ph: <a value="+60379600541" style="color:rgb(17,85,204)">+60379600541</a></div><div>Fax: <a value="+60379600540" style="color:rgb(17,85,204)">+60379600540</a></div></div></div></div>
<br><div class="gmail_quote">On Mon, Feb 9, 2015 at 2:19 AM,  <span dir="ltr">&lt;<a href="mailto:prmarino1@gmail.com" target="_blank">prmarino1@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="background-color:rgb(255,255,255);line-height:initial"><div style="width:100%;font-size:initial;font-family:Calibri,&#39;Slate Pro&#39;,sans-serif;color:rgb(31,73,125);text-align:initial;background-color:rgb(255,255,255)">Quorum only appli‎es when you have 3 or more bricks replicating each other. In other words it doesn&#39;t mean any thing in a 2 node 2 brick cluster so it shouldn&#39;t be set.</div><div style="width:100%;font-size:initial;font-family:Calibri,&#39;Slate Pro&#39;,sans-serif;color:rgb(31,73,125);text-align:initial;background-color:rgb(255,255,255)"><br></div><div style="width:100%;font-size:initial;font-family:Calibri,&#39;Slate Pro&#39;,sans-serif;color:rgb(31,73,125);text-align:initial;background-color:rgb(255,255,255)">In other words based on your settings it&#39;s acting correctly because it thinks that the online brick needs to have a minimum of one other brick it agrees with online.</div>                                                                                                                                     <div style="width:100%;font-size:initial;font-family:Calibri,&#39;Slate Pro&#39;,sans-serif;color:rgb(31,73,125);text-align:initial;background-color:rgb(255,255,255)"><br style="display:initial"></div>                                                                                                                                     <div style="font-size:initial;font-family:Calibri,&#39;Slate Pro&#39;,sans-serif;color:rgb(31,73,125);text-align:initial;background-color:rgb(255,255,255)">Sent from my BlackBerry 10 smartphone.</div>                                                                                                                                                                                        <table width="100%" style="background-color:white;border-spacing:0px"> <tbody><tr><td colspan="2" style="font-size:initial;text-align:initial;background-color:rgb(255,255,255)">                                              <div style="border-style:solid none none;border-top-color:rgb(181,196,223);border-top-width:1pt;padding:3pt 0in 0in;font-family:Tahoma,&#39;BB Alpha Sans&#39;,&#39;Slate Pro&#39;;font-size:10pt">  <div><b>From: </b>Kaamesh Kamalaaharan</div><div><b>Sent: </b>Sunday, February 8, 2015 05:50</div><div><b>To: </b><a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a></div><div><b>Subject: </b>[Gluster-users] 2 Node glusterfs quorum help</div></div></td></tr></tbody></table><div><div class="h5"><div style="border-style:solid none none;border-top-color:rgb(186,188,209);border-top-width:1pt;font-size:initial;text-align:initial;background-color:rgb(255,255,255)"></div><br><div><div dir="ltr">Hi guys. I have a 2 node replicated gluster  setup with the quorum count set at 1 brick. By my understanding this means that the gluster will not  go down when one brick is disconnected. This however proves false and when one brick is disconnected (i just pulled it off the network) the remaining brick goes down as well and i lose my mount points on the server. <div>can anyone shed some light on whats wrong? 
>
> My gfs config options are as follows:
>
> Volume Name: gfsvolume
> Type: Replicate
> Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gfs1:/export/sda/brick
> Brick2: gfs2:/export/sda/brick
> Options Reconfigured:
> cluster.quorum-count: 1
> auth.allow: 172.*
> cluster.quorum-type: fixed
> performance.cache-size: 1914589184
> performance.cache-refresh-timeout: 60
> cluster.data-self-heal-algorithm: diff
> performance.write-behind-window-size: 4MB
> nfs.trusted-write: off
> nfs.addr-namelookup: off
> cluster.server-quorum-type: server
> performance.cache-max-file-size: 2MB
> network.frame-timeout: 90
> network.ping-timeout: 30
> performance.quick-read: off
> cluster.server-quorum-ratio: 50%
>
> Thank You Kindly,
> Kaamesh
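As a concrete illustration of the advice quoted above (these commands are not from the original thread): if you keep the volume at two bricks instead of adding a third peer, the quorum-related options shown in the volume info could be cleared back to their defaults with the standard reset syntax:

    # Drop the quorum settings that don't make sense on a 2-brick replica;
    # option names are taken from the "Options Reconfigured" list above.
    gluster volume reset gfsvolume cluster.quorum-type
    gluster volume reset gfsvolume cluster.quorum-count
    gluster volume reset gfsvolume cluster.server-quorum-type

    # Check that the options no longer appear under "Options Reconfigured".
    gluster volume info gfsvolume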