<div class="moz-cite-prefix">On 09/01/2015 09:21 AM, Atin Mukherjee
wrote:<br>
</div>
> -Atin
> Sent from one plus one
>
> On Sep 1, 2015 9:39 PM, "Joe Julian" <joe@julianfamily.org> wrote:
>>
>> On 09/01/2015 02:59 AM, Atin Mukherjee wrote:
>>>
>>> On 09/01/2015 02:34 PM, Joe Julian wrote:
>>>>
>>>> On 08/31/2015 09:03 PM, Atin Mukherjee wrote:
>>>>>
>>>>> On 09/01/2015 01:00 AM, Merlin Morgenstern wrote:
>>>>>>
>>>>>> This all makes sense and sounds a bit like a Solr setup :-)
>>>>>>
>>>>>> I have now added the third node as a peer:
>>>>>>     sudo gluster peer probe gs3
>>>>>>
>>>>>> That indeed allows me to mount the share manually on node2 even
>>>>>> if node1 is down.
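A quick sanity check that the probe took effect, for reference:

    # run on each node; the other two peers should show as connected
    gluster peer status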
>>>>>>
>>>>>> BUT: it does not mount on reboot! It only mounts successfully if
>>>>>> node1 is up. I need to run a manual "sudo mount -a".
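For reference, the usual fix for this is to name backup volfile servers
in fstab and defer the mount until networking is up. A sketch, assuming
the volume is called "gv0" (the real volume name isn't shown in this
thread):

    # /etc/fstab: fetch the volfile from gs1, falling back to gs2/gs3;
    # _netdev delays the mount until the network is up at boot
    gs1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backup-volfile-servers=gs2:gs3  0 0

The server named in fstab only matters at mount time; once the client
has the volfile it connects to all the bricks directly.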
>>>>>
>>>>> You would need to ensure that at least two of the nodes are up in
>>>>> the cluster in this case.
>>>>
>>>> Atin, why? I've never had that restriction.
>>>>
>>>> It sounds to me like the mount is trying to happen before any
>>>> bricks are available and/or glusterd is listening.
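If that's what's happening, the client mount log should show the failed
connection attempts at boot. A sketch, assuming a mount point of
/mnt/gv0 (mount.glusterfs derives the log name from the mount path):

    less /var/log/glusterfs/mnt-gv0.log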
>>>
>>> In a 3-node cluster, if two of the nodes are already down and the
>>> remaining one is rebooted, the GlusterD instance won't start the
>>> brick processes until it receives the first handshake from one of
>>> its peers, and for that you would need your 2nd node to be up as
>>> well. This is why it's recommended to add a 3rd dummy node (without
>>> any bricks) to an existing 2-node cluster.
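One way to tell whether this is what happened after a reboot is to
check whether the brick processes ever started. A sketch, assuming a
volume named "gv0":

    # bricks that glusterd never spawned show up as "Online: N"
    gluster volume status gv0
    # the bricks themselves run as glusterfsd processes; glusterd is
    # only the management daemon
    pgrep -a glusterfsd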
>>
>> Again I ask, is this a departure from prior behavior?
>
> Not really, it's been there for a long time. IIRC, this change was
> made when the quorum feature was introduced. KP can correct me.
Right, so unless server quorum is enabled, this shouldn't be the problem.
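One way to confirm, assuming a volume named "gv0": server quorum only
kicks in when cluster.server-quorum-type has been set to "server", so
if nothing quorum-related shows under "Options Reconfigured", it was
never enabled:

    gluster volume info gv0 | grep -i quorum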
>>>>>> Is there a particular reason for this, or is it a misconfiguration?
>>>>>>
>>>>>> 2015-08-31 21:01 GMT+02:00 Joe Julian <joe@julianfamily.org>:
>>>>>>
>>>>>>> On 08/31/2015 10:41 AM, Vijay Bellur wrote:
>>>>>>>
>>>>>>>> On Monday 31 August 2015 10:42 PM, Atin Mukherjee wrote:
>>>>>>>>
>>>>>>>>> > 2. Server2 dies. Server1 has to reboot.
>>>>>>>>> >
>>>>>>>>> > In this case the service stays down. It is impossible to
>>>>>>>>> > remount the share without Server1. This is not acceptable
>>>>>>>>> > for a High Availability system, and I believe it is also
>>>>>>>>> > not intended, but a misconfiguration or bug.
>>>>>>>>> This is exactly what I gave as an example in the thread
>>>>>>>>> (please read again). GlusterD is not supposed to start the
>>>>>>>>> brick processes if its counterpart hasn't come up yet in a
>>>>>>>>> 2-node setup. It has been designed this way to block GlusterD
>>>>>>>>> from operating on a volume which could be stale, since the
>>>>>>>>> node was down while the cluster was operational earlier.
>>>>>>>>>
>>>>>>>> For two-node deployments, a third dummy node is recommended
>>>>>>>> to ensure that quorum is maintained when one of the nodes is
>>>>>>>> down.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Vijay
>>>>>>>
>>>>>>> Have the settings changed to enable server quorum by default?
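Not as far as I know; server quorum is still opt-in per volume. For
reference, a sketch of what enabling it looks like, assuming a volume
named "gv0":

    # have glusterd stop the bricks when server-side quorum is lost
    gluster volume set gv0 cluster.server-quorum-type server
    # the ratio is cluster-wide, hence "all": require more than half
    # of the peers to be up
    gluster volume set all cluster.server-quorum-ratio 51%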