[Gluster-users] Gluster Storage Platform 3.0.5 fails to detect new nodes

Craig Carl craig at gluster.com
Fri Sep 24 03:10:34 UTC 2010


Kevin - 
Some good suggestions; I'll make sure they get to product management and engineering. Thanks. 

Craig 

-- 
Craig Carl 
Sales Engineer; Gluster, Inc. 
Cell - (408) 829-9953 (California, USA) 
Office - (408) 770-1884 
Gtalk - craig.carl at gmail.com 
Twitter - @gluster 
Installing Gluster Storage Platform, the movie! 
http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/ 



From: brooks at netgate.net 
To: "Craig Carl" <craig at gluster.com> 
Cc: gluster-users at gluster.org 
Sent: Wednesday, September 22, 2010 8:23:47 AM 
Subject: Re: [Gluster-users] Gluster Storage Platform 3.0.5 fails to detect new nodes 


Thank you. The problem turned out to be a bad GigE card in the new node. 
The information both you and Bala provided was extremely helpful in 
isolating and resolving the cause. I'm going to replace the card today 
and I'm sure things will start to work correctly. 

I did notice some issues with the IP address range configured in
/etc/dnsmasq.conf.

1. The addresses in the pool must be on the same subnet as the
   address on eth0, or the DHCP address assigned to a new node
   will cause the install to fail. That should be made clearer.
   But, as Bala said, the installer should not expect to use
   eth0 (fixed in 3.1). With that fix in place I would expect
   the DHCP addresses could be attached to any interface and it
   would work as expected.

2. The range configured in the GUI doesn't match the range defined
   in the conf file. For example, I have 10.0.0.102 - 10.0.0.110
   configured in the GUI for the storage pool, yet the line in the
   dnsmasq.conf file reads:

   dhcp-range=10.0.0.110,10.0.0.110,5

   That may be by design, but given that configuration I can see
   how the last address in the pool already being in use would
   cause the installer to fail. I guess you manage the DHCP pool
   one address at a time, decrementing it as new nodes are
   installed. You might consider using link-local addressing to
   facilitate the install process and make it transparent to the
   end user. (A sample dhcp-range line covering the full range is
   sketched just after this list.)
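
For reference, a dhcp-range line that covers the whole GUI-configured
range (and sits on eth0's subnet, per point 1) might look something
like the following; the 12h lease time is just an illustrative value,
not necessarily what the installer writes:

   dhcp-range=10.0.0.102,10.0.0.110,12h

dnsmasq would then hand out any free address in that range rather than
a single fixed one.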

It would be nice to include a utility in the installer to help identify
the Ethernet cards. Different kernels bring up the cards in different
orders, so it's not always easy to know which card is which. A tool like
ethtool, which lets you see link status, is usually good enough to map
the cards to interface names.
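
For instance, with a cable plugged into one port at a time, something
like this (the eth0/eth1 names are just an assumption) shows which
interface has link:

   ethtool eth0 | grep "Link detected"
   ethtool eth1 | grep "Link detected"

Whichever interface reports "Link detected: yes" is the one connected
to that cable.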

I would also suggest letting people know that there is a local caching DNS
server running, which is why the first DNS server used in the install
process defaults to the local host's IP address.
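
To illustrate (a hypothetical example, not the installer's actual
output), the node's resolver configuration would end up looking
roughly like this, with the local caching server listed first:

   # hypothetical /etc/resolv.conf on a freshly installed node
   # the node's own address (the local caching DNS server)
   nameserver 10.0.0.102
   # upstream DNS server entered during install
   nameserver 10.0.0.1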

Thanks again, 
Kevin 

