<p dir="ltr"></p>
<p dir="ltr">-Atin<br>
Sent from one plus one<br>
On Oct 30, 2015 5:28 PM, "Mauro Mozzarelli" <<a href="mailto:mauro@ezplanet.net">mauro@ezplanet.net</a>> wrote:<br>
><br>
> Hi,<br>
><br>
> Atin keeps giving the same answer: "it is by design"<br>
><br>
> I keep saying "the design is wrong and it should be changed to cater for<br>
> standby servers"<br>
Every design has its own set of limitations, and I would call this a limitation rather than saying the overall design is wrong. I would again stand by my point that correctness is always the priority in a distributed system. This behavioural change was introduced in 3.5; if it was not included in the release notes, I apologize on behalf of the release management. <br>
As communicated earlier, we will definitely resolve this issue in GlusterD2.<br>
><br>
> In the meantime this is the workaround I am using:<br>
> When the single node starts I stop and start the volume, and then it<br>
> becomes mountable. On CentOS 6 and CentOS 7 it works with release up to<br>
> 3.7.4. Release 3.7.5 is broken so I reverted back to 3.7.4.<br>
This is where I am not convinced. An explicit volume start should start the bricks. Can you raise a BZ with all the relevant details?<br>
><br>
> In my experience glusterfs releases are a bit hit and miss. Often<br>
> something stops working with newer releases, then after a few more<br>
> releases it works again or there is a workaround ... Not quite the<br>
> stability one would want for commercial use, so at the moment I can<br>
> risk using it only for my home servers, hence the cluster with one node<br>
> always ON and the second as STANDBY.<br>
><br>
> MOUNT=/home<br>
> LABEL="GlusterFS:"<br>
> if grep -qs $MOUNT /proc/mounts; then<br>
>     echo "$LABEL $MOUNT is mounted";<br>
>     gluster volume start gv_home 2>/dev/null<br>
> else<br>
>     echo "$LABEL $MOUNT is NOT mounted";<br>
>     echo "$LABEL Restarting gluster volume ..."<br>
>     yes|gluster volume stop gv_home > /dev/null<br>
>     gluster volume start gv_home<br>
>     mount -t glusterfs sirius-ib:/gv_home $MOUNT;<br>
>     if grep -qs $MOUNT /proc/mounts; then<br>
>         echo "$LABEL $MOUNT is mounted";<br>
>         gluster volume start gv_home 2>/dev/null<br>
>     else<br>
>         echo "$LABEL failure to mount $MOUNT";<br>
>     fi<br>
> fi<br>
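For what it's worth, a shorter variant of the same workaround (a sketch only, assuming your release supports the `force` option of `gluster volume start`) would force-start the bricks without a stop/start cycle:<br>

```shell
#!/bin/sh
MOUNT=/home

# 'force' asks glusterd to start any bricks of the volume that are
# not currently running, without stopping the volume first.
gluster volume start gv_home force

# Mount only if the volume is not already mounted.
if ! grep -qs "$MOUNT" /proc/mounts; then
    mount -t glusterfs sirius-ib:/gv_home "$MOUNT"
fi
```

Whether a force-start is sufficient on a single surviving node depends on the glusterd behaviour discussed in this thread, so treat it as something to test rather than a confirmed fix.<br>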
><br>
> I hope this helps.<br>
> Mauro<br>
><br>
> On Fri, October 30, 2015 11:48, Atin Mukherjee wrote:<br>
> > -Atin<br>
> > Sent from one plus one<br>
> > On Oct 30, 2015 4:35 PM, "Remi Serrano" <<a href="mailto:rserrano@pros.com">rserrano@pros.com</a>> wrote:<br>
> >><br>
> >> Hello,<br>
> >><br>
> >><br>
> >><br>
> >> I setup a gluster file cluster with 2 nodes. It works fine.<br>
> >><br>
> >> But when I shut down the 2 nodes and start up only one node, I cannot<br>
> >> mount the share:<br>
> >><br>
> >><br>
> >><br>
> >> [root@xxx ~]# mount -t glusterfs 10.32.0.11:/gv0 /glusterLocalShare<br>
> >><br>
> >> Mount failed. Please check the log file for more details.<br>
> >><br>
> >><br>
> >><br>
> >> Log says:<br>
> >><br>
> >> [2015-10-30 10:33:26.147003] I [MSGID: 100030] [glusterfsd.c:2318:main]<br>
> > 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.5<br>
> > (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1 --volfile-id=/gv0<br>
> > /glusterLocalShare)<br>
> >><br>
> >> [2015-10-30 10:33:26.171964] I [MSGID: 101190]<br>
> > [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread<br>
> > with index 1<br>
> >><br>
> >> [2015-10-30 10:33:26.185685] I [MSGID: 101190]<br>
> > [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread<br>
> > with index 2<br>
> >><br>
> >> [2015-10-30 10:33:26.186972] I [MSGID: 114020] [client.c:2118:notify]<br>
> > 0-gv0-client-0: parent translators are ready, attempting connect on<br>
> > transport<br>
> >><br>
> >> [2015-10-30 10:33:26.191823] I [MSGID: 114020] [client.c:2118:notify]<br>
> > 0-gv0-client-1: parent translators are ready, attempting connect on<br>
> > transport<br>
> >><br>
> >> [2015-10-30 10:33:26.192209] E [MSGID: 114058]<br>
> > [client-handshake.c:1524:client_query_portmap_cbk] 0-gv0-client-0: failed<br>
> > to get the port number for remote subvolume. Please run 'gluster volume status' on server<br>
> > to see if brick process is running.<br>
> >><br>
> >> [2015-10-30 10:33:26.192339] I [MSGID: 114018]<br>
> > [client.c:2042:client_rpc_notify] 0-gv0-client-0: disconnected from<br>
> > gv0-client-0. Client process will keep trying to connect to glusterd until brick's port is<br>
> > available<br>
> >><br>
> >><br>
> >><br>
> >> And when I check the volumes I get:<br>
> >><br>
> >> [root@xxx ~]# gluster volume status<br>
> >><br>
> >> Status of volume: gv0<br>
> >><br>
> >> Gluster process                            TCP Port  RDMA Port  Online  Pid<br>
> >> ------------------------------------------------------------------------------<br>
> >> Brick 10.32.0.11:/glusterBrick1/gv0        N/A       N/A        N       N/A<br>
> >> NFS Server on localhost                    N/A       N/A        N       N/A<br>
> >> NFS Server on localhost                    N/A       N/A        N       N/A<br>
> >><br>
> >> Task Status of Volume gv0<br>
> >> ------------------------------------------------------------------------------<br>
> >> There are no active volume tasks<br>
> >><br>
> >><br>
> >><br>
> >> If I start the second node, all is OK.<br>
> >><br>
> >><br>
> >><br>
> >> Is this normal?<br>
> > This behaviour is by design. In a multi-node cluster, when GlusterD comes<br>
> > up it doesn't start the bricks until it receives the configuration from<br>
> > one of its peers, to ensure that stale information is not referred to.<br>
> > In your case, since the other node is down, the bricks are not started<br>
> > and hence the mount fails.<br>
> > As a workaround, we recommend adding a dummy node to the cluster to<br>
> > avoid this issue.<br>
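A minimal sketch of that workaround, assuming a third host (here called dummy-node, a hypothetical name) that is reachable from the cluster and running glusterd:<br>

```shell
# Run on an existing cluster member: probe the dummy peer so the
# trusted pool has three nodes. A single restarted data node can then
# fetch the volume configuration from the dummy peer even while the
# other data node is down.
gluster peer probe dummy-node

# Confirm the peer joined the trusted pool.
gluster peer status
```

The dummy node needs glusterd running but does not have to host any bricks.<br>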
> >><br>
> >><br>
> >><br>
> >> Regards,<br>
> >><br>
> >><br>
> >><br>
> >> Rémi<br>
> >><br>
> >><br>
> >><br>
> >><br>
> >> _______________________________________________<br>
> >> Gluster-users mailing list<br>
> >> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> >> <a href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
> > _______________________________________________<br>
> > Gluster-users mailing list<br>
> > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> > <a href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
><br>
><br>
> --<br>
> Mauro Mozzarelli<br>
> Phone: +44 7941 727378<br>
> eMail: <a href="mailto:mauro@ezplanet.net">mauro@ezplanet.net</a><br>
><br>
</p>