[Gluster-users] [Gluster 3.2.1] Replication issues on a two-brick volume

Julien Groselle julien.groselle at gmail.com
Tue Aug 2 06:41:21 UTC 2011


Hello,

Here are the logs we have from just before replication failed:

[2011-08-02 08:32:33.549303] I [client3_1-fops.c:453:client3_1_readlink_cbk]
0-REP_SVG-client-1: remote operation failed: No such file or directory
[2011-08-02 08:32:33.549334] W [fuse-bridge.c:982:fuse_readlink_cbk]
0-glusterfs-fuse: 2169522:
/Sauvegarde/ecrevisse.coe.int/etc/20110720/rc3.d=> -1 (Transport
endpoint is not connected)
[2011-08-02 08:32:33.566909] I [client3_1-fops.c:453:client3_1_readlink_cbk]
0-REP_SVG-client-1: remote operation failed: No such file or directory
[2011-08-02 08:32:33.566937] W [fuse-bridge.c:982:fuse_readlink_cbk]
0-glusterfs-fuse: 2169654: /Sauvegarde/ecrevisse.coe.int/etc/20110720/rmt =>
-1 (Transport endpoint is not connected)
[2011-08-02 08:32:33.598883] I [client3_1-fops.c:453:client3_1_readlink_cbk]
0-REP_SVG-client-1: remote operation failed: No such file or directory
[2011-08-02 08:32:33.598910] W [fuse-bridge.c:982:fuse_readlink_cbk]
0-glusterfs-fuse: 2169919:
/Sauvegarde/ecrevisse.coe.int/etc/20110720/init.d=> -1 (Transport
endpoint is not connected)
[2011-08-02 08:33:40.925756] I [client3_1-fops.c:2132:client3_1_opendir_cbk]
0-REP_SVG-client-1: remote operation failed: No such file or directory
[2011-08-02 08:33:40.925809] W [fuse-bridge.c:582:fuse_fd_cbk]
0-glusterfs-fuse: 2419699: OPENDIR() /Sauvegarde/
taal.coe.int/usrshare/20110720/vim/vim72/autoload => -1 (No such file or
directory)
[2011-08-02 08:33:41.458567] I [fuse-bridge.c:3218:fuse_thread_proc] 0-fuse:
unmounting /etc/glusterd/mount/REP_SVG
[2011-08-02 08:33:41.473995] W [glusterfsd.c:712:cleanup_and_exit]
(-->/lib/libc.so.6(clone+0x6d) [0x7fed6a91d02d]
(-->/lib/libpthread.so.0(+0x68ba) [0x7fed6abb58ba]
(-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x40536d]))) 0-: received
signum (15), shutting down

# gluster volume rebalance REP_SVG status
rebalance failed
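
Since the client log above ends with glusterfs receiving signum 15 and
unmounting /etc/glusterd/mount/REP_SVG, I assume the volume is no longer
mounted on the client. Would remounting it with the native client, e.g.:
# mount -t glusterfs toomba-svg.coe.int:/REP_SVG /etc/glusterd/mount/REP_SVG
be enough to recover, or is there something else we should check first?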

Do you have any solutions?

Thank you in advance!

*Julien Groselle*


2011/8/1 Julien Groselle <julien.groselle at gmail.com>

> First of all, we have millions of these lines:
> [2011-08-01 10:30:30.155065] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 392601459:
> ef33731f-7c5c-435e-b300-2ffe2ab1c155
> [2011-08-01 10:30:30.155767] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 392601459:
> ef33731f-7c5c-435e-b300-2ffe2ab1c155
> [2011-08-01 10:30:30.156144] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 392601459:
> ef33731f-7c5c-435e-b300-2ffe2ab1c155
> [2011-08-01 10:30:30.156641] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 392601459:
> ef33731f-7c5c-435e-b300-2ffe2ab1c155
> [2011-08-01 10:30:30.159850] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 392601459:
> ef33731f-7c5c-435e-b300-2ffe2ab1c155
> [2011-08-01 10:30:30.160271] W [inode.c:1035:inode_path]
> 0-/storage/backup/inode: no dentry for non-root inode 392601459:
> ef33731f-7c5c-435e-b300-2ffe2ab1c155
>
> What does this mean?
>
> And we have many "No such file or directory" errors, like this one:
> [2011-08-01 10:37:23.342608] E [posix.c:1085:posix_readlink]
> 0-REP_SVG-posix: readlink on /Sauvegarde/
> vulcano.coe.int/usrshare/20110729/terminfo/v/vt102 failed: No such file or
> directory
>
> I hope this helps.
>
> *Julien Groselle*
>
>
>
> 2011/8/1 Julien Groselle <julien.groselle at gmail.com>
>
>> I'm sorry, someone on my team flushed the logs and restarted the
>> rebalance...
>> I will send you the logs as soon as possible, when the problem comes up
>> again.
>>
>> You want me to create a distributed / replicated volume
>> <http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Creating_Distributed_Replicated_Volumes>,
>> right?
>> I used this command to create my volume:
>> # gluster volume create REP_SVG replica 2 transport tcp
>> server1:/storage/backup server2:/storage/backup
>>
>> What is the command to create a distributed / replicated volume?
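>> From the documentation page above, my understanding is that a
>> distributed / replicated volume simply needs more bricks than the replica
>> count, so with four bricks (hypothetical volume and brick names, not our
>> real paths) it would look something like:
>> # gluster volume create DIST_REP replica 2 transport tcp \
>>     server1:/storage/brick1 server2:/storage/brick1 \
>>     server1:/storage/brick2 server2:/storage/brick2
>> Is that the right idea?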
>>
>> Thank you very much for your quick answer!
>>
>> *Julien Groselle*
>>
>>
>> 2011/8/1 Anand Avati <anand.avati at gmail.com>
>>
>>> Can you please provide logs for the errors you are facing? Also,
>>> rebalance is not the right operation for your situation. You don't
>>> seem to have a distributed setup (but a pure replicate instead), in
>>> which case rebalance really isn't achieving anything for you.
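>>> If what you want is simply for the second brick to hold a copy of
>>> everything, the usual approach on a plain replicate 3.2 volume is to
>>> trigger self-heal by stat'ing all files through the client mount, for
>>> example (a sketch only; adjust the mount point to yours):
>>> # find /etc/glusterd/mount/REP_SVG -print0 | xargs -0 stat > /dev/null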
>>>
>>> Avati
>>>
>>> On Mon, Aug 1, 2011 at 12:30 PM, Julien Groselle <
>>> julien.groselle at gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> I installed GlusterFS one month ago, and replication has many
>>>> issues.
>>>> First of all, our infrastructure: two 8 TB storage arrays in replication
>>>> mode... We keep our backup files on these arrays, so 6 TB of data.
>>>>
>>>> I want to replicate the data onto the second storage array, so I used this
>>>> command:
>>>> # gluster volume rebalance REP_SVG migrate-data start
>>>> Gluster started to replicate, and in 2 weeks we had 2.6Yb of data
>>>> replicated.
>>>> But now replication fails after about one day... with
>>>> many errors.
>>>>
>>>> So I have two questions. Is there any option or command to speed up
>>>> replication?
>>>> We have to keep backing up our servers... so during the replication,
>>>> many files are rotated/moved/added.
>>>> Is it a problem for Gluster to replicate data during a backup session?
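>>>> (For reference, the only tuning we have done so far is through gluster
>>>> volume set, e.g.:
>>>> # gluster volume set REP_SVG performance.cache-size 256MB
>>>> # gluster volume set REP_SVG performance.write-behind-window-size 16MB
>>>> as shown under "Options Reconfigured" below.)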
>>>>
>>>> For now, we can't replicate any more data! We need help.
>>>>
>>>> FYI :
>>>> # gluster --version
>>>> glusterfs 3.2.1 built on Jun 12 2011 12:29:36
>>>> Repository revision: v3.2.1
>>>> Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com>
>>>> GlusterFS comes with ABSOLUTELY NO WARRANTY.
>>>> You may redistribute copies of GlusterFS under the terms of the GNU
>>>> Affero General Public License.
>>>>
>>>> # uname -a
>>>> Linux toomba 2.6.32-5-amd64 #1 SMP Wed Jan 12 03:40:32 UTC 2011 x86_64
>>>> GNU/Linux
>>>>
>>>> # cat /etc/debian_version
>>>> 6.0.2
>>>>
>>>> # gluster peer status
>>>> Number of Peers: 1
>>>>
>>>> Hostname: kaiserstuhl-svg.coe.int
>>>> Uuid: 5b79b4bc-c8d2-48d4-bd43-37991197ab47
>>>> State: Peer in Cluster (Connected)
>>>>
>>>> # gluster volume info all
>>>>
>>>> Volume Name: REP_SVG
>>>> Type: Replicate
>>>> Status: Started
>>>> Number of Bricks: 2
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: toomba-svg.coe.int:/storage/backup
>>>> Brick2: kaiserstuhl-svg.coe.int:/storage/backup
>>>> Options Reconfigured:
>>>> performance.write-behind-window-size: 16MB
>>>> performance.cache-size: 256MB
>>>> diagnostics.brick-log-level: WARNING
>>>>
>>>> *Julien Groselle*
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>>>
>>>>
>>>
>>
>