<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Your ssh commands connect to port 2503 - is that port listening on
the slaves?<br>
Does sshd on the slaves use privilege separation?<br>
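A quick way to check both points, sketched under the assumption that the slave hostnames from the status output below are reachable (the port_open helper is a hypothetical name, not a gluster tool):

```shell
# Returns 0 if host:port accepts a TCP connection (uses bash /dev/tcp).
port_open() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# On a slave node itself, confirm sshd is bound to the non-standard port:
#   ss -tlnp | grep ':2503 '
# From a master node, test plain TCP reachability:
if port_open gluster-wien-05-int 2503; then
    echo "port 2503 open"
else
    echo "port 2503 closed, filtered, or host unknown"
fi
```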
<div class="moz-signature">
Don't force the change_detector to changelog before the initial sync via xsync has completed.<br>
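Once the initial crawl has finished (i.e. "Crawl Status" is no longer "Hybrid Crawl"), the detector can be switched through the geo-replication config interface; a sketch using the session names from this thread, to be adjusted to your setup:

```shell
# Run on a master node after the initial xsync crawl has completed.
gluster volume geo-replication ger-ber-01 gluster-wien-02::aut-wien-01 \
    config change_detector changelog

# Verify the new setting:
gluster volume geo-replication ger-ber-01 gluster-wien-02::aut-wien-01 \
    config change_detector
```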
<br>
The warning "fuse: xlator does not implement release_cbk" was
fixed in 3.6.0alpha1, but it looks like it could easily be backported:
<a class="moz-txt-link-freetext" href="https://github.com/gluster/glusterfs/commit/bca9eab359710eb3b826c6441126e2e56f774df5">https://github.com/gluster/glusterfs/commit/bca9eab359710eb3b826c6441126e2e56f774df5</a><br>
<br>
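Regarding the gfid comparison further down: the hex value printed by getfattr can be converted to the dashed UUID form used in master_gfid_file.txt, so the two can be compared directly. A minimal sketch in plain bash, assuming only the xattr hex format shown below:

```shell
# Convert a trusted.gfid xattr value (as printed by "getfattr -e hex")
# into the canonical dashed UUID form used in master_gfid_file.txt.
gfid_to_uuid() {
    local g=${1#0x}                 # strip the leading 0x
    printf '%s-%s-%s-%s-%s\n' \
        "${g:0:8}" "${g:8:4}" "${g:12:4}" "${g:16:4}" "${g:20:12}"
}

gfid_to_uuid 0x1abb953baa9d4fa39a72415204057572
# → 1abb953b-aa9d-4fa3-9a72-415204057572
```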
</div>
<div class="moz-cite-prefix">On 11/11/2015 3:20 AM, Dietmar Putz
wrote:<br>
</div>
<blockquote cite="mid:56422772.4050005@3qmedien.net" type="cite">Hi
all,
<br>
<br>
I need some help with a geo-replication issue...
<br>
Recently I upgraded two 6-node distributed-replicated Gluster
clusters from Ubuntu 12.04.5 LTS to 14.04.3 LTS, and GlusterFS from
3.4.7 to 3.5.6.
<br>
Since then geo-replication does not start syncing; it has remained
in the state shown in the 'status detail' output below for about 48h.
<br>
<br>
I followed the hints for upgrading with an existing geo-replication
setup:
<br>
<a class="moz-txt-link-freetext" href="http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5">http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5</a>
<br>
<br>
The master_gfid_file.txt was created and applied to the slave
volume; geo-replication was then started with the 'force' option.
<br>
In the gluster.log on the slave I find thousands of lines with
messages like:
<br>
".../.gfid/1abb953b-aa9d-4fa3-9a72-415204057572 => -1
(Operation not permitted)"
<br>
and no files are synced.
<br>
<br>
I'm not sure what's going on, and since about 40 TB of data have
already been replicated by the old 3.4.7 setup, I am wary of just
trying things out...
<br>
So I have some questions... maybe somebody can give me some hints...
<br>
<br>
1. As shown in the example below, the trusted.gfid of the same file
differs between the master and the slave volume. As far as I
understood the upgrade howto, after applying the master_gfid_file.txt
on the slave they should be the same on master and slave... is that
right?
<br>
2. As shown in the config below, the change_detector is 'xsync'.
Somewhere I read that xsync is used for the initial replication and
that it switches to 'changelog' later, once the entire sync is done.
Should I try to set the change_detector to 'changelog'; does that
make sense...?
<br>
<br>
Any other ideas that could help me solve this problem...?
<br>
<br>
best regards
<br>
dietmar
<br>
<br>
<br>
<br>
<br>
[ 11:10:01 ] - root@gluster-ger-ber-09 ~ $glusterfs --version
<br>
glusterfs 3.5.6 built on Sep 16 2015 15:27:30
<br>
...
<br>
[ 11:11:37 ] - root@gluster-ger-ber-09 ~ $cat
/var/lib/glusterd/glusterd.info | grep operating-version
<br>
operating-version=30501
<br>
<br>
<br>
[ 10:55:35 ] - root@gluster-ger-ber-09 ~ $gluster volume
geo-replication ger-ber-01 <a class="moz-txt-link-freetext" href="ssh://gluster-wien-02::aut-wien-01">ssh://gluster-wien-02::aut-wien-01</a>
status detail
<br>
<br>
MASTER NODE | MASTER VOL | MASTER BRICK | SLAVE | STATUS | CHECKPOINT STATUS | CRAWL STATUS | FILES SYNCD | FILES PENDING | BYTES PENDING | DELETES PENDING | FILES SKIPPED
<br>
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
<br>
gluster-ger-ber-09 | ger-ber-01 | /gluster-export | gluster-wien-05-int::aut-wien-01 | Active | N/A | Hybrid Crawl | 0 | 8191 | 0 | 0 | 0
<br>
gluster-ger-ber-11 | ger-ber-01 | /gluster-export | <a class="moz-txt-link-freetext" href="ssh://gluster-wien-02::aut-wien-01">ssh://gluster-wien-02::aut-wien-01</a> | Not Started | N/A | N/A | N/A | N/A | N/A | N/A | N/A
<br>
gluster-ger-ber-10 | ger-ber-01 | /gluster-export | <a class="moz-txt-link-freetext" href="ssh://gluster-wien-02::aut-wien-01">ssh://gluster-wien-02::aut-wien-01</a> | Not Started | N/A | N/A | N/A | N/A | N/A | N/A | N/A
<br>
gluster-ger-ber-12 | ger-ber-01 | /gluster-export | <a class="moz-txt-link-freetext" href="ssh://gluster-wien-02::aut-wien-01">ssh://gluster-wien-02::aut-wien-01</a> | Not Started | N/A | N/A | N/A | N/A | N/A | N/A | N/A
<br>
gluster-ger-ber-07 | ger-ber-01 | /gluster-export | <a class="moz-txt-link-freetext" href="ssh://gluster-wien-02::aut-wien-01">ssh://gluster-wien-02::aut-wien-01</a> | Not Started | N/A | N/A | N/A | N/A | N/A | N/A | N/A
<br>
gluster-ger-ber-08 | ger-ber-01 | /gluster-export | gluster-wien-04-int::aut-wien-01 | Passive | N/A | N/A | 0 | 0 | 0 | 0 | 0
<br>
[ 10:55:48 ] - root@gluster-ger-ber-09 ~ $
<br>
<br>
<br>
[ 10:56:56 ] - root@gluster-ger-ber-09 ~ $gluster volume
geo-replication ger-ber-01 <a class="moz-txt-link-freetext" href="ssh://gluster-wien-02::aut-wien-01">ssh://gluster-wien-02::aut-wien-01</a>
config
<br>
special_sync_mode: partial
<br>
state_socket_unencoded:
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.socket<br>
gluster_log_file:
/var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.gluster.log<br>
ssh_command: ssh -p 2503 -oPasswordAuthentication=no
-oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem
<br>
ignore_deletes: true
<br>
change_detector: xsync
<br>
ssh_command_tar: ssh -p 2503 -oPasswordAuthentication=no
-oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/tar_ssh.pem
<br>
working_dir:
/var/run/gluster/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01<br>
remote_gsyncd: /nonexistent/gsyncd
<br>
log_file:
/var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.log<br>
socketdir: /var/run
<br>
state_file:
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.status<br>
state_detail_file:
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01-detail.status<br>
session_owner: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671
<br>
gluster_command_dir: /usr/sbin/
<br>
pid_file:
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.pid<br>
georep_session_working_dir:
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_aut-wien-01/<br>
gluster_params: aux-gfid-mount
<br>
volume_id: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671
<br>
[ 11:10:01 ] - root@gluster-ger-ber-09 ~ $
<br>
<br>
<br>
<br>
[ 12:45:34 ] - root@gluster-wien-05
/var/log/glusterfs/geo-replication-slaves $tail -f
6a071cfa-b150-4f0b-b1ed-96ab5d4bd671\:gluster%3A%2F%2F127.0.0.1%3Aaut-wien-01.gluster.log<br>
[2015-11-10 12:59:16.097932] W
[fuse-bridge.c:1942:fuse_create_cbk] 0-glusterfs-fuse: 54267:
/.gfid/1abb953b-aa9d-4fa3-9a72-415204057572 => -1 (Operation
not permitted)
<br>
[2015-11-10 12:59:16.098044] W [defaults.c:1381:default_release]
(-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.6/xlator/mount/fuse.so(+0xfb4d)
[0x7fc9cd104b4d]
(-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.6/xlator/mount/fuse.so(free_fuse_state+0x85)
[0x7fc9cd0fab95]
(-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(fd_unref+0x10e)
[0x7fc9cf52ec9e]))) 0-fuse: xlator does not implement release_cbk
<br>
...
<br>
<br>
<br>
grep 1abb953b-aa9d-4fa3-9a72-415204057572 master_gfid_file.txt
<br>
1abb953b-aa9d-4fa3-9a72-415204057572 1050/hyve/364/14158.mp4
<br>
<br>
putz@sdn-de-gate-01:~/central$ ./mycommand.sh -H
gluster-ger,gluster-wien -c "getfattr -m . -d -e hex
/gluster-export/1050/hyve/364/14158.mp4"
<br>
...
<br>
master volume :
<br>
-----------------------------------------------------
<br>
Host : gluster-ger-ber-09-int
<br>
# file: gluster-export/1050/hyve/364/14158.mp4
<br>
trusted.afr.ger-ber-01-client-6=0x000000000000000000000000
<br>
trusted.afr.ger-ber-01-client-7=0x000000000000000000000000
<br>
trusted.gfid=0x1abb953baa9d4fa39a72415204057572
<br>
trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime=0x54bff5c40008dd7f
<br>
-----------------------------------------------------
<br>
Host : gluster-ger-ber-10-int
<br>
# file: gluster-export/1050/hyve/364/14158.mp4
<br>
trusted.afr.ger-ber-01-client-6=0x000000000000000000000000
<br>
trusted.afr.ger-ber-01-client-7=0x000000000000000000000000
<br>
trusted.gfid=0x1abb953baa9d4fa39a72415204057572
<br>
trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime=0x54bff5c40008dd7f
<br>
...
<br>
slave volume :
<br>
Host : gluster-wien-04
<br>
# file: gluster-export/1050/hyve/364/14158.mp4
<br>
trusted.afr.aut-wien-01-client-2=0x000000000000000000000000
<br>
trusted.afr.aut-wien-01-client-3=0x000000000000000000000000
<br>
trusted.gfid=0x129ba62c3d214b34beb366fb1e2c8e4b
<br>
trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime=0x54bff5c40008dd7f
<br>
-----------------------------------------------------
<br>
Host : gluster-wien-05
<br>
# file: gluster-export/1050/hyve/364/14158.mp4
<br>
trusted.afr.aut-wien-01-client-2=0x000000000000000000000000
<br>
trusted.afr.aut-wien-01-client-3=0x000000000000000000000000
<br>
trusted.gfid=0x129ba62c3d214b34beb366fb1e2c8e4b
<br>
trusted.glusterfs.6a071cfa-b150-4f0b-b1ed-96ab5d4bd671.xtime=0x54bff5c40008dd7f
<br>
-----------------------------------------------------
<br>
...
<br>
putz@sdn-de-gate-01:~/central$
<br>
<br>
<br>
</blockquote>
<br>
</body>
</html>