<div dir="ltr">Personally I'd be much more interested in development/testing resources going into large-scale GlusterFS clusters rather than small-office setups or home use. Keep in mind this is a PB-scale filesystem clustering technology.<div><br></div><div>For home use I don't really see what advantage replica 2 would provide. I'd probably run two single nodes and have the primary geo-replicate to the secondary, so my data would remain intact if the primary failed. In a small office I could switch the DNS record to the second node for failover. In fact I probably wouldn't (and don't) use gluster at home at all; there are other volume managers with snapshot and send/receive capabilities that suit a small environment.</div><div><br></div><div>Really, if your data is important at such a small scale, I'd be looking at a single file server plus cloud replication. S3 is about $3/month for 100 GB of data and $60/month for 2 TB, can store multiple versions, and can move old versions into Glacier storage. Any individual or small business should be able to work out what its data is worth and how much of it they want to pay to back up. Over three years it might even be cheaper than a second node plus dealing with maintenance and split-brains.</div><div><br></div><div>BTW I agree with your issues with regard to releases. I've found the best method is to stick to a branch marked as stable. I tested 3.7.3 and it was a bit of a disaster, but 3.6.6 hasn't given me any grief yet.</div><div><br></div><div>Steve</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Oct 30, 2015 at 6:40 AM, Mauro M. <span dir="ltr"><<a href="mailto:gluster@ezplanet.net" target="_blank">gluster@ezplanet.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Atin,<br>
<br>
Sorry, I should have said that the design does not suit the needs of an<br>
ON/STANDBY cluster configuration, and I would like it to be changed to<br>
cater for this popular use case in home and small-office applications.<br>
<br>
Up to release 3.5 it was perfect, and besides, I had never experienced<br>
split-brain situations; in fact, until I was on 3.5 I did not even realize<br>
there could be split-brains (I am a "use without reading the manuals" guy;<br>
if I had to add the time needed to read the manuals of everything I use, I<br>
would be 190 before I was done). I skipped 3.6 altogether because 3.6.1<br>
did not even start my bricks. Later I upgraded to 3.7, and that is when<br>
the trouble started: split-brains that periodically pop up even though I<br>
never have a case where files are accessed at the same time from two nodes<br>
(I am the only user of my systems and the second node is only there to<br>
replicate), and problems getting the cluster to work single-node.<br>
<br>
Mauro<br>
<div><div class="h5"><br>
On Fri, October 30, 2015 12:14, Atin Mukherjee wrote:<br>
> -Atin<br>
> Sent from one plus one<br>
> On Oct 30, 2015 5:28 PM, "Mauro Mozzarelli" <<a href="mailto:mauro@ezplanet.net">mauro@ezplanet.net</a>> wrote:<br>
>><br>
>> Hi,<br>
>><br>
>> Atin keeps giving the same answer: "it is by design"<br>
>><br>
>> I keep saying "the design is wrong and it should be changed to cater for<br>
>> standby servers"<br>
> Every design has its own set of limitations, and I would call this a<br>
> limitation rather than saying the overall design itself is wrong. I would<br>
> again stand by my point that correctness is always the priority in a<br>
> distributed system. This behavioural change was introduced in 3.5, and if<br>
> it was not included in the release notes, I apologize on behalf of the<br>
> release management.<br>
> As communicated earlier, we will definitely resolve this issue in<br>
> GlusterD2.<br>
>><br>
>> In the meantime this is the workaround I am using:<br>
>> when the single node starts, I stop and start the volume, and then it<br>
>> becomes mountable. On CentOS 6 and CentOS 7 this works with releases up<br>
>> to 3.7.4. Release 3.7.5 is broken, so I reverted to 3.7.4.<br>
> This is where I am not convinced: an explicit volume start should start<br>
> the bricks. Can you raise a BZ with all the relevant details?<br>
>><br>
>> In my experience GlusterFS releases are a bit hit-and-miss. Often<br>
>> something stops working in a newer release, then after a few more<br>
>> releases it works again or there is a workaround ... not quite the<br>
>> stability one would want for commercial use. At the moment I can<br>
>> therefore risk using it only for my home servers, hence the cluster<br>
>> with one node always ON and the second as STANDBY.<br>
>><br>
>> MOUNT=/home<br>
>> LABEL="GlusterFS:"<br>
>> if grep -qs $MOUNT /proc/mounts; then<br>
>>     echo "$LABEL $MOUNT is mounted";<br>
>>     gluster volume start gv_home 2>/dev/null<br>
>> else<br>
>>     echo "$LABEL $MOUNT is NOT mounted";<br>
>>     echo "$LABEL Restarting gluster volume ..."<br>
>>     yes | gluster volume stop gv_home > /dev/null<br>
>>     gluster volume start gv_home<br>
>>     mount -t glusterfs sirius-ib:/gv_home $MOUNT;<br>
>>     if grep -qs $MOUNT /proc/mounts; then<br>
>>         echo "$LABEL $MOUNT is mounted";<br>
>>         gluster volume start gv_home 2>/dev/null<br>
>>     else<br>
>>         echo "$LABEL failure to mount $MOUNT";<br>
>>     fi<br>
>> fi<br>
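>> A more robust variant of the same check could use mountpoint(1) from<br>
>> util-linux instead of grepping /proc/mounts, since grep can false-match<br>
>> when one mount path is a substring of another. A rough, untested sketch<br>
>> of the recovery branch along those lines:<br>
>><br>
>> # -q: quiet; exit status alone tells us whether $MOUNT is a mount point<br>
>> if ! mountpoint -q "$MOUNT"; then<br>
>>     echo "$LABEL $MOUNT is NOT mounted; restarting gluster volume ..."<br>
>>     yes | gluster volume stop gv_home > /dev/null<br>
>>     gluster volume start gv_home<br>
>>     mount -t glusterfs sirius-ib:/gv_home "$MOUNT"<br>
>> fi<br>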
>><br>
>> I hope this helps.<br>
>> Mauro<br>
>><br>
>> On Fri, October 30, 2015 11:48, Atin Mukherjee wrote:<br>
>> > -Atin<br>
>> > Sent from one plus one<br>
>> > On Oct 30, 2015 4:35 PM, "Remi Serrano" <<a href="mailto:rserrano@pros.com">rserrano@pros.com</a>> wrote:<br>
>> >><br>
>> >> Hello,<br>
>> >><br>
>> >><br>
>> >><br>
>> >> I set up a gluster file cluster with 2 nodes. It works fine.<br>
>> >><br>
>> >> But when I shut down both nodes and start up only one node, I cannot<br>
>> >> mount the share:<br>
>> >><br>
>> >><br>
>> >><br>
>> >> [root@xxx ~]# mount -t glusterfs 10.32.0.11:/gv0 /glusterLocalShare<br>
>> >><br>
>> >> Mount failed. Please check the log file for more details.<br>
>> >><br>
>> >><br>
>> >><br>
>> >> Log says:<br>
>> >><br>
>> >> [2015-10-30 10:33:26.147003] I [MSGID: 100030] [glusterfsd.c:2318:main]<br>
>> >> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.5<br>
>> >> (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1 --volfile-id=/gv0<br>
>> >> /glusterLocalShare)<br>
>> >><br>
>> >> [2015-10-30 10:33:26.171964] I [MSGID: 101190]<br>
>> > [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started<br>
>> thread<br>
>> > with index 1<br>
>> >><br>
>> >> [2015-10-30 10:33:26.185685] I [MSGID: 101190]<br>
>> > [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started<br>
>> thread<br>
>> > with index 2<br>
>> >><br>
>> >> [2015-10-30 10:33:26.186972] I [MSGID: 114020] [client.c:2118:notify]<br>
>> > 0-gv0-client-0: parent translators are ready, attempting connect on<br>
>> > transport<br>
>> >><br>
>> >> [2015-10-30 10:33:26.191823] I [MSGID: 114020] [client.c:2118:notify]<br>
>> > 0-gv0-client-1: parent translators are ready, attempting connect on<br>
>> > transport<br>
>> >><br>
>> >> [2015-10-30 10:33:26.192209] E [MSGID: 114058]<br>
>> >> [client-handshake.c:1524:client_query_portmap_cbk] 0-gv0-client-0: failed<br>
>> >> to get the port number for remote subvolume. Please run 'gluster volume<br>
>> >> status' on server to see if brick process is running.<br>
>> >><br>
>> >> [2015-10-30 10:33:26.192339] I [MSGID: 114018]<br>
>> >> [client.c:2042:client_rpc_notify] 0-gv0-client-0: disconnected from<br>
>> >> gv0-client-0. Client process will keep trying to connect to glusterd<br>
>> >> until brick's port is available<br>
>> >><br>
>> >><br>
>> >><br>
>> >> And when I check the volumes I get:<br>
>> >><br>
>> >> [root@xxx ~]# gluster volume status<br>
>> >><br>
>> >> Status of volume: gv0<br>
>> >><br>
>> >> Gluster process                        TCP Port  RDMA Port  Online  Pid<br>
>> >> ------------------------------------------------------------------------------<br>
>> >> Brick 10.32.0.11:/glusterBrick1/gv0    N/A       N/A        N       N/A<br>
>> >> NFS Server on localhost                N/A       N/A        N       N/A<br>
>> >> NFS Server on localhost                N/A       N/A        N       N/A<br>
>> >><br>
>> >> Task Status of Volume gv0<br>
>> >> ------------------------------------------------------------------------------<br>
>> >> There are no active volume tasks<br>
>> >><br>
>> >><br>
>> >><br>
>> >> If I start the second node, all is OK.<br>
>> >><br>
>> >><br>
>> >><br>
>> >> Is this normal ?<br>
>> > This behaviour is by design. In a multi-node cluster, when GlusterD<br>
>> > comes up it doesn't start the bricks until it receives the configuration<br>
>> > from one of its peers, to ensure that stale information is not referred<br>
>> > to. In your case, since the other node is down, the bricks are not<br>
>> > started and hence the mount fails.<br>
>> > As a workaround, we recommend adding a dummy node to the cluster to avoid<br>
>> > this issue.<br>
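>> > The dummy node only needs glusterd running and reachable; it does not<br>
>> > have to host any bricks. Something along these lines, run from an<br>
>> > existing cluster node (the hostname is illustrative):<br>
>> ><br>
>> > gluster peer probe dummy-node<br>
>> > gluster peer status<br>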
>> >><br>
>> >><br>
>> >><br>
>> >> Regards,<br>
>> >><br>
>> >><br>
>> >><br>
</div></div>>> >> Rémi<br>
<div class="HOEnZb"><div class="h5">>> >><br>
>> >><br>
>> >><br>
>> >><br>
>> >> _______________________________________________<br>
>> >> Gluster-users mailing list<br>
>> >> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
>> >> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
>><br>
>><br>
>> --<br>
>> Mauro Mozzarelli<br>
>> Phone: <a href="tel:%2B44%207941%20727378" value="+447941727378">+44 7941 727378</a><br>
>> eMail: <a href="mailto:mauro@ezplanet.net">mauro@ezplanet.net</a><br>
>><br>
<br>
<br>
--<br>
Mauro Mozzarelli<br>
Phone: <a href="tel:%2B44%207941%20727378" value="+447941727378">+44 7941 727378</a><br>
eMail: <a href="mailto:mauro@ezplanet.net">mauro@ezplanet.net</a><br>
<br>
</div></div></blockquote></div><br></div>