[Gluster-users] big file over glusterfs_nfs

查舒玉 zhashuyu at 163.com
Sun Feb 19 11:09:22 UTC 2012


Hi,

Recently, I set up GlusterFS (3.2.5) servers as the VM backend for XenServer (6.0). The details are as follows:
The Linux system is CentOS 6.2 x86_64. On each of the 3 machines I mount two 1 TB disks at /gluster01 and /gluster02.
I created a distribute-type volume named xen_bk_vol:
gluster> volume info xen_bk_vol
Volume Name: xen_bk_vol
Type: Distribute
Status: Stopped
Number of Bricks: 6
Transport-type: tcp
Bricks:
Brick1: 10.52.10.5:/gluster01/xen_bk_vol
Brick2: 10.52.10.5:/gluster02/xen_bk_vol
Brick3: 10.52.10.6:/gluster01/xen_bk_vol
Brick4: 10.52.10.6:/gluster02/xen_bk_vol
Brick5: 10.52.10.7:/gluster01/xen_bk_vol
Brick6: 10.52.10.7:/gluster02/xen_bk_vol
Options Reconfigured:
network.ping-timeout: 5
auth.allow: 10.*,172.27.*
gluster>
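For reference, a volume with this layout would have been created with something like the following (a sketch; hostnames and brick paths are taken from the volume info above, and the two reconfigured options are set afterwards):

```shell
# Sketch: create the 6-brick distribute volume shown above
# (assumes the brick directories already exist on each peer)
gluster volume create xen_bk_vol transport tcp \
  10.52.10.5:/gluster01/xen_bk_vol 10.52.10.5:/gluster02/xen_bk_vol \
  10.52.10.6:/gluster01/xen_bk_vol 10.52.10.6:/gluster02/xen_bk_vol \
  10.52.10.7:/gluster01/xen_bk_vol 10.52.10.7:/gluster02/xen_bk_vol

# Options matching "Options Reconfigured" in the volume info
gluster volume set xen_bk_vol network.ping-timeout 5
gluster volume set xen_bk_vol auth.allow '10.*,172.27.*'
gluster volume start xen_bk_vol
```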

I mount the volume on XenServer using:
"mount 10.52.10.6:xen_bk_vol /backup -t nfs -o proto=tcp,vers=3"
(My XenServer hosts are also in the same subnet.)
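For a persistent mount, the equivalent /etc/fstab entry would look roughly like this (a sketch using the same options as the mount command above):

```shell
# /etc/fstab entry equivalent to the mount command above (a sketch)
10.52.10.6:/xen_bk_vol  /backup  nfs  proto=tcp,vers=3  0 0
```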
When I use "xe vm-export" (which exports a VM to a single .xva file on XenServer) to export VMs, I ran into something confusing. Once the exported VM file exceeds a certain size, such as 100 GB or 200 GB (I don't know the exact number), the mount point cannot accept any more writes, and the GlusterFS server I use as the NFS mount target (10.52.10.6) reboots very often. I cannot find any information in the GlusterFS logs.
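For reference, the logs I looked through are the 3.2.x defaults under /var/log/glusterfs on the servers (the per-brick log filenames are derived from the brick paths, so these exact names are an assumption):

```shell
# Default server-side GlusterFS log locations (3.2.x defaults assumed)
tail -n 100 /var/log/glusterfs/nfs.log                          # gluster NFS server log
tail -n 100 /var/log/glusterfs/bricks/gluster01-xen_bk_vol.log  # per-brick log (name assumed)
tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # glusterd daemon log
```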

Is my setup wrong, or is this a bug?

Thanks!


