[Gluster-users] Xen and Gluster problem

Martín Eduardo Bradaschia martin.bradaschia at intercomgi.net
Tue Aug 31 12:07:35 UTC 2010


Hi !

I have this Xen over Gluster deployment

3 Xen servers, 2 Gluster servers (2 bricks each one) replicated, 1 Gbit 
dedicated network between all of them, about 200 Gbytes for 22 Xen 
virtual machines.

Everything was fine until the following sequence of actions:

- Shutdown 2nd Gluster Server
- Move it (to another rack)
- Restart the server

At this point auto-healing kicked in and all 22 virtual machines became blocked!

CPU iowait sat at 100% on the 2nd Gluster server.

For now I am running on just the 1st Gluster server (the 2nd is disconnected
from the network).

All servers run Debian Lenny with GlusterFS 3.0.5.

Questions:

1- Is there a way to run the healing in the background?
2- Is there something wrong in my configs? (attached at the bottom)
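
(What I had in mind for question 1: as I understand it, replicate self-heal in 3.0.x is triggered by the first client-side lookup on each file, so the whole tree can be pre-healed at low priority from a client mount instead of letting the first VM access to each image block. A sketch — the mount point /mnt/gluster is just an example path:)

```shell
# Walk the tree from a client mount; each stat() does a lookup, which
# triggers replicate self-heal for that file. nice keeps the walk from
# competing with VM I/O. /mnt/gluster is a hypothetical mount point.
nice -n 19 find /mnt/gluster -noleaf -print0 | xargs -0 stat >/dev/null
```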

Thanks in advance!

=============== Server config ====================

#
# First brick definition
#
volume posix1
   type storage/posix
   option directory /disco1/01
   option background-unlink yes          # Recommended when the filesystem holds multi-GB files
end-volume

volume locks1
   type features/posix-locks
   option mandatory-locks on
   subvolumes posix1
end-volume

volume brick1
   type performance/io-threads
   option thread-count 8                 # Default is 16
   subvolumes locks1
end-volume

#
# Second brick definition
#
volume posix2
   type storage/posix
   option directory /disco1/02
   option background-unlink yes          # Recommended when the filesystem holds multi-GB files
end-volume

volume locks2
   type features/posix-locks
   option mandatory-locks on
   subvolumes posix2
end-volume

volume brick2
   type performance/io-threads
   option thread-count 8                 # Default is 16
   subvolumes locks2
end-volume

#
# The server exporting both bricks
#
volume server
   type protocol/server
   option transport-type tcp
   option transport.socket.bind-address 10.253.2.8
   option transport.socket.listen-port 7000
   option auth.addr.brick1.allow *
   option auth.addr.brick2.allow *
   subvolumes brick1 brick2
end-volume
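
(For completeness, each storage server loads this volfile via the export daemon; roughly — the volfile path is just an example, Debian's init script may point elsewhere:)

```shell
# Start the GlusterFS export daemon with the server volfile above.
# /etc/glusterfs/glusterfsd.vol is a hypothetical path.
glusterfsd -f /etc/glusterfs/glusterfsd.vol
```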


================ Client config ==================

#
# First brick on virgen
#
volume client1
   type protocol/client
   option transport-type tcp
   option remote-host 10.253.2.9
   option remote-port 7000
   option remote-subvolume brick1
end-volume

#
# Second brick on virgen
#
volume client2
   type protocol/client
   option transport-type tcp
   option remote-host 10.253.2.9
   option remote-port 7000
   option remote-subvolume brick2
end-volume

#
# First brick on mate
#
volume client3
   type protocol/client
   option transport-type tcp
   option remote-host 10.253.2.8
   option remote-port 7000
   option remote-subvolume brick1
end-volume

#
# Second brick on mate
#
volume client4
   type protocol/client
   option transport-type tcp
   option remote-host 10.253.2.8
   option remote-port 7000
   option remote-subvolume brick2
end-volume

#
# Mirror the first bricks of virgen and mate
#
volume server1
   type cluster/replicate
   subvolumes client1 client3
end-volume

#
# Mirror the second bricks of virgen and mate
#
volume server2
   type cluster/replicate
   subvolumes client2 client4
end-volume

#
# Aggregate the replicated pairs into a single complete volume
#
volume completo
   type cluster/distribute
   option min-free-disk 20%
   option lookup-unhashed yes
   subvolumes server1 server2
end-volume

#
# Performance translators
#
volume writebehind
   type performance/write-behind
   option cache-size 4MB
   subvolumes completo
end-volume

volume iocache
   type performance/io-cache
   option cache-size 64MB
   subvolumes writebehind
end-volume
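
(Each Xen host mounts this volfile with the FUSE client; roughly — the volfile path and mount point are just examples:)

```shell
# Mount the GlusterFS volume described by the client volfile above.
# Both paths are hypothetical.
glusterfs -f /etc/glusterfs/client.vol /mnt/gluster
```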




-- 
Martin Eduardo Bradaschia
Intercomgi Argentina




