[Gluster-users] Two nodes both as server+client

Daniel Jordan Bambach dan at lateral.net
Thu Jun 5 16:38:24 UTC 2008


Hiya all..

A scenario that seems to be a very neat solution for a basic high
availability webserver setup (Apache, MySQL, Python+Django) is to set
up two machines, configure master<->master replication between the two
MySQL databases, and then use GlusterFS to mirror the filesystem that
carries the Apache config, the Django applications and the file upload
folders between the machines. You can pull the plug on either box, and
things should keep running on the other.

With this in mind, I have set up an arrangement whereby each box runs
glusterfsd and also runs a client that connects to the local server.
AFR is set up at the server level, so that if/when the other machine
goes down, the client happily carries on serving read/write requests
while the server deals with the absence of its peer.
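
Just to spell out the layering, here is a rough sketch of how the two
specs below stack up on each box (this is only a diagram of the
configs that follow, not extra configuration):

# on each box:
#
#   FUSE mount
#     client spec:  data (booster) -> readahead -> initial
#                   (protocol/client to localhost, remote-subvolume "data")
#     server spec:  server (protocol/server) -> data (io-threads) -> brick-afr
#                     -> local brick (posix-locks on top of posix)
#                     -> protocol/client to the other box's brick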

I've set this up in a test environment and all is working peachy, so
we are now thinking of deploying it to a new production environment.

Before we do, I wanted to poll the collective knowledge of this list
to see whether there are any gotchas in this setup that I might have
missed, or any obvious performance features I should be using but am
not.
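
For instance, I have been wondering whether stacking a write-behind
translator into the client spec (between read-ahead and the booster)
would help with the upload folders. Something like the sketch below is
the kind of thing I mean (the volume name and option values are just
illustrative guesses, not something I've tested):

volume writebehind
	type performance/write-behind
	option aggregate-size 128kB	# buffer up to this much before writing out
	option flush-behind on		# let close() return before pending writes finish
	subvolumes readahead
end-volume

# ...with the booster volume's subvolumes then pointed at "writebehind"
# instead of "readahead".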

Any help or advice would be greatly appreciated!

Here are the current server and client configs for the two machines:

#common client config
volume initial
	type protocol/client
	option transport-type tcp/client
	option remote-host localhost
	option remote-subvolume data	
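	# "data" here is the io-threads volume exported by the local server
	# spec (below), not this file's own top-level "data" volume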
end-volume

volume readahead
	type performance/read-ahead
	option page-size 128kB		# 256KB is the default option
	option page-count 4			# 2 is default option
	option force-atime-update off	# default is off
	subvolumes initial
end-volume

volume data
	type performance/booster
	subvolumes readahead
end-volume

#latsrv1 - server config for box 1
volume posix
  type storage/posix                   # POSIX FS translator
  option directory /home/export        # Export this directory
end-volume

volume brick-latsrv1
  type features/posix-locks
  subvolumes posix
end-volume

volume brick-latsrv2
  type protocol/client
  option transport-type tcp/client
  option remote-host latsrv2
  option remote-subvolume brick-latsrv2
end-volume

volume brick-afr
  type cluster/afr
  subvolumes brick-latsrv1 brick-latsrv2   # mirror the local brick with latsrv2's brick
  option read-subvolume brick-latsrv1      # prefer the local copy for reads
end-volume

volume data
  type performance/io-threads              # the volume the local client connects to
  option thread-count 8
  option cache-size 64MB
  subvolumes brick-afr
end-volume

volume server
  type protocol/server
  option transport-type tcp/server	# For TCP/IP transport
  option auth.ip.data.allow *		# Allow access to the "data" volume
  option auth.ip.brick-latsrv1.allow *
  subvolumes data brick-latsrv1
end-volume

#latsrv2 - server config for box 2
volume posix
  type storage/posix                   # POSIX FS translator
  option directory /home/export        # Export this directory
end-volume

volume brick-latsrv2
  type features/posix-locks
  subvolumes posix
end-volume

volume brick-latsrv1
  type protocol/client
  option transport-type tcp/client
  option remote-host latsrv1
  option remote-subvolume brick-latsrv1
end-volume

volume brick-afr
  type cluster/afr
  subvolumes brick-latsrv1 brick-latsrv2
  option read-subvolume brick-latsrv2
end-volume

volume data
  type performance/io-threads
  option thread-count 8
  option cache-size 64MB
  subvolumes brick-afr
end-volume

volume server
  type protocol/server
  option transport-type tcp/server	# For TCP/IP transport
  option auth.ip.data.allow *		# Allow access to the "data" volume
  option auth.ip.brick-latsrv2.allow *
  subvolumes data brick-latsrv2
end-volume




