<div dir="ltr"><div><div><div>Hi Atin,<br><br></div>You are right! I was using version 3.5 in production, and when I checked the Gluster source code, I looked at the wrong commit (not the latest commit on the master branch).<br><br></div>You've already implemented my proposed solution: it is in the function gd_peerinfo_find_from_addrinfo, in the file xlators/mgmt/glusterd/src/glusterd-peer-utils.c.<br><br></div><div class="gmail_extra">Thanks for your tip! And sorry for any inconvenience.<br><div><div class="gmail_signature"><div dir="ltr"><br>--<br><b>Rarylson Freitas</b><br></div></div></div>
<br><div class="gmail_quote">On Thu, Jul 2, 2015 at 2:01 AM, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Which gluster version are you using? Better peer identification feature<br>
(available 3.6 onwards) should tackle this problem IMO.<br>
<br>
~Atin<br>
<div><div class="h5"><br>
On 07/02/2015 10:05 AM, Rarylson Freitas wrote:<br>
> Hi,<br>
><br>
> Recently, my company needed to change our hostnames used in the Gluster<br>
> Pool.<br>
><br>
> At first, we had two Gluster nodes called storage1 and storage2.<br>
> Our volumes used two bricks: storage1:/MYVOLUME and storage2:/MYVOLUME. We<br>
> put the storage1 and storage2 IPs in the /etc/hosts file of our nodes and<br>
> in our client servers.<br>
><br>
> After some time, more client servers started using Gluster, and we<br>
> discovered that maintaining hostnames without a domain (via /etc/hosts) on all<br>
> client servers is a pain in the a$$ :(. So we decided to change them to<br>
> something like <a href="http://storage1.mydomain.com" rel="noreferrer" target="_blank">storage1.mydomain.com</a> and <a href="http://storage2.mydomain.com" rel="noreferrer" target="_blank">storage2.mydomain.com</a>.<br>
><br>
> Remember that, at this point, we already had some volumes (with bricks):<br>
><br>
> $ gluster volume info MYVOL<br>
> [...]<br>
> Brick1: storage1:/MYDIR<br>
> Brick2: storage2:/MYDIR<br>
><br>
> For simplicity, let's consider that we had two Gluster Nodes, each one with<br>
> the following entries in /etc/hosts:<br>
><br>
> 10.10.10.1 storage1<br>
> 10.10.10.2 storage2<br>
><br>
> To implement the hostname change, we changed the /etc/hosts file to:<br>
><br>
> 10.10.10.1 storage1 <a href="http://storage1.mydomain.com" rel="noreferrer" target="_blank">storage1.mydomain.com</a><br>
> 10.10.10.2 storage2 <a href="http://storage2.mydomain.com" rel="noreferrer" target="_blank">storage2.mydomain.com</a><br>
><br>
> And we ran on storage1:<br>
><br>
> $ gluster peer probe <a href="http://storage2.mydomain.com" rel="noreferrer" target="_blank">storage2.mydomain.com</a><br>
> peer probe: success<br>
><br>
> Everything worked well for some time, but glusterd started to fail<br>
> after every restart:<br>
><br>
> $ service glusterfs-server status<br>
> glusterfs-server start/running, process 14714<br>
> $ service glusterfs-server restart<br>
> glusterfs-server stop/waiting<br>
> glusterfs-server start/running, process 14860<br>
> $ service glusterfs-server status<br>
> glusterfs-server stop/waiting<br>
><br>
> To start the service again, it was necessary to roll back the hostname1<br>
> field to storage2 in /var/lib/glusterd/peers/OUR_UUID.<br>
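> For context, a peer file under /var/lib/glusterd/peers/ is a small key-value file. Ours looked roughly like the sketch below (the UUID is a placeholder and the state value may differ on other setups):<br>
><br>

```
uuid=<peer-uuid>
state=3
hostname1=storage2
```

> It was the hostname1 line above that we had to roll back by hand.<br>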
><br>
> After some trial and error, we discovered that if we changed the order of the<br>
> entries in /etc/hosts and repeated the process, everything worked.<br>
><br>
> That is, from:<br>
><br>
> 10.10.10.1 storage1 <a href="http://storage1.mydomain.com" rel="noreferrer" target="_blank">storage1.mydomain.com</a><br>
> 10.10.10.2 storage2 <a href="http://storage2.mydomain.com" rel="noreferrer" target="_blank">storage2.mydomain.com</a><br>
><br>
> To:<br>
><br>
> 10.10.10.1 <a href="http://storage1.mydomain.com" rel="noreferrer" target="_blank">storage1.mydomain.com</a> storage1<br>
> 10.10.10.2 <a href="http://storage2.mydomain.com" rel="noreferrer" target="_blank">storage2.mydomain.com</a> storage2<br>
><br>
> And run:<br>
><br>
> gluster peer probe <a href="http://storage2.mydomain.com" rel="noreferrer" target="_blank">storage2.mydomain.com</a><br>
> service glusterfs-server restart<br>
><br>
> So we checked the glusterd debug log and the GlusterFS source<br>
> code, and discovered that the big secret was the function<br>
> glusterd_friend_find_by_hostname, in the file<br>
> xlators/mgmt/glusterd/src/glusterd-utils.c. This function is called for<br>
> each brick that isn't a local brick and does the following:<br>
><br>
</div></div>> - It checks if the brick hostname is equal to some peer hostname;<br>
>    - If it is, this peer is our wanted friend;<br>
> - If not, it gets the brick IP (resolving the hostname with the function<br>
> getaddrinfo) and checks if the brick IP is equal to the peer hostname;<br>
>    - That is, we could have run gluster peer probe 10.10.10.2, since the brick<br>
> IP (storage2 resolves to 10.10.10.2) would be equal to the peer<br>
> "hostname" (10.10.10.2);<br>
>    - If it is, this peer is our wanted friend;<br>
> - If not, it gets the reverse of the brick IP (using the function<br>
> getnameinfo) and checks if the brick's reverse name is equal to the peer<br>
> hostname;<br>
>    - This is why changing the order of the entries in /etc/hosts worked<br>
> as a workaround for us;<br>
> - If not, it returns an error (and glusterd will fail).<br>
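> The cascade above can be sketched in C roughly as follows. This is our own simplified illustration, not the actual glusterd code: friend_matches is a hypothetical name, error handling is minimal, and the real function iterates over the whole peer list rather than taking a single peer hostname.<br>
><br>

```c
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Return 1 if brick_host should be treated as the same node as peer_host,
 * following the cascade: literal compare -> forward lookup -> reverse lookup. */
static int friend_matches(const char *brick_host, const char *peer_host)
{
    /* Step 1: direct hostname comparison. */
    if (strcmp(brick_host, peer_host) == 0)
        return 1;

    struct addrinfo hints = {0}, *res = NULL, *ai = NULL;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(brick_host, NULL, &hints, &res) != 0)
        return 0;                       /* brick hostname does not resolve */

    int found = 0;
    char ipstr[NI_MAXHOST], revname[NI_MAXHOST];
    for (ai = res; ai != NULL && !found; ai = ai->ai_next) {
        /* Step 2: compare the brick's numeric IP with the peer "hostname". */
        if (getnameinfo(ai->ai_addr, ai->ai_addrlen, ipstr, sizeof(ipstr),
                        NULL, 0, NI_NUMERICHOST) == 0 &&
            strcmp(ipstr, peer_host) == 0)
            found = 1;
        /* Step 3: reverse-resolve the brick IP and compare that name.
         * The resolver returns the FIRST name on the /etc/hosts line,
         * which is why reordering the entries worked as a workaround. */
        else if (getnameinfo(ai->ai_addr, ai->ai_addrlen, revname,
                             sizeof(revname), NULL, 0, NI_NAMEREQD) == 0 &&
                 strcmp(revname, peer_host) == 0)
            found = 1;
    }
    freeaddrinfo(res);
    return found;
}
```

> Note how step 3 compares only the canonical (first) name for the IP: with "10.10.10.2 storage2 storage2.mydomain.com", the reverse of 10.10.10.2 is storage2, never storage2.mydomain.com.<br>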
<span class="">><br>
> However, we think that comparing the brick IP (resolving the brick<br>
> hostname) with the peer IP (resolving the peer hostname) would be a simpler<br>
> and more comprehensive solution. Since the brick and the peer may have<br>
> different hostnames but the same IP, it would still work.<br>
><br>
> The solution could be:<br>
><br>
</span>> - It checks if the brick hostname is equal to some peer hostname;<br>
>    - If it is, this peer is our wanted friend;<br>
> - If not, it gets both the brick IP (resolving the brick hostname with the<br>
> function getaddrinfo) and the peer IP (resolving the peer hostname) and,<br>
> for each IP pair, checks if a brick IP is equal to a peer IP;<br>
>    - If it is, this peer is our wanted friend;<br>
> - If not, it returns an error (and glusterd will fail).<br>
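> Our proposal could be sketched like this (again our own hypothetical illustration, not GlusterFS code; share_an_address is a made-up name): resolve both sides and look for any address they have in common.<br>
><br>

```c
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Return 1 if brick_host and peer_host resolve to at least one common IP. */
static int share_an_address(const char *brick_host, const char *peer_host)
{
    /* Same hostname: trivially the same node. */
    if (strcmp(brick_host, peer_host) == 0)
        return 1;

    struct addrinfo hints = {0}, *bres = NULL, *pres = NULL;
    hints.ai_socktype = SOCK_STREAM;

    /* Resolve BOTH sides, instead of only the brick side. */
    if (getaddrinfo(brick_host, NULL, &hints, &bres) != 0)
        return 0;
    if (getaddrinfo(peer_host, NULL, &hints, &pres) != 0) {
        freeaddrinfo(bres);
        return 0;
    }

    int found = 0;
    char bip[NI_MAXHOST], pip[NI_MAXHOST];
    for (struct addrinfo *b = bres; b != NULL && !found; b = b->ai_next) {
        if (getnameinfo(b->ai_addr, b->ai_addrlen, bip, sizeof(bip),
                        NULL, 0, NI_NUMERICHOST) != 0)
            continue;
        /* Compare each brick IP against each peer IP in numeric form. */
        for (struct addrinfo *p = pres; p != NULL; p = p->ai_next) {
            if (getnameinfo(p->ai_addr, p->ai_addrlen, pip, sizeof(pip),
                            NULL, 0, NI_NUMERICHOST) == 0 &&
                strcmp(bip, pip) == 0) {
                found = 1;
                break;
            }
        }
    }
    freeaddrinfo(bres);
    freeaddrinfo(pres);
    return found;
}
```

> With this approach, a brick registered as storage2 and a peer probed as storage2.mydomain.com would match as long as both names resolve to 10.10.10.2, regardless of the order of the entries in /etc/hosts.<br>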
<span class="">><br>
> What do you think about it?<br>
> --<br>
><br>
</span>> *Rarylson Freitas*<br>
> Computer Engineer<br>
><br>
><br>
><br>
> _______________________________________________<br>
> Gluster-devel mailing list<br>
> <a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
> <a href="http://www.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-devel</a><br>
><br>
<span class="HOEnZb"><font color="#888888"><br>
--<br>
~Atin<br>
</font></span></blockquote></div><br></div></div>