<div dir="ltr">It seems that it is no longer possible to stop and delete a<br>GlusterFS volume, then clean the component bricks and recreate<br>the volume from them, when the volume is &quot;Distributed-Replicate&quot;.<br>The same steps succeed, with the correct data, on a simple<br>&quot;Replicate&quot; or &quot;Distribute&quot; volume.<br><br>In the failing &quot;Distributed-Replicate&quot; case, the &quot;rebalance&quot;<br>always fails after the recreate. The &quot;heal&quot; command succeeds,<br>and &quot;info split-brain&quot; reports no files, but about half of the<br>files are missing, and there are many &quot;(Possible split-brain)&quot;<br>warnings in the logfile.<br><br>The &quot;Distributed-Replicate&quot; volume recreate procedure works<br>fine in GlusterFS versions 3.2.7 and 3.4.2, but not in<br>glusterfs 3.6.5, 3.6.7, or 3.7.6.<br><br>Perhaps the recreate procedure has changed, or I&#39;m doing<br>something wrong that now matters in the newer GlusterFS<br>versions.<br><br>Details below. 
Any ideas how to make it work again?<br><br>Thanks.<br><br>~ Jeff Byers ~<br><br># glusterd -V<br>glusterfs 3.7.6 built on Dec 14 2015 07:05:12<br><br>################################################<br># Failing &quot;Distributed-Replicate&quot; recreate case.<br>################################################<br><br># mountpoint /exports/test-dir/<br>/exports/test-dir/ is a mountpoint<br># mount |grep test-dir<br>/dev/sdu on /exports/test-dir type xfs (rw,noatime,nodiratime,barrier,nouuid,inode64,logbufs=8,logbsize=256k)<br><br># mkdir /exports/test-dir/test-brick-1a<br># mkdir /exports/test-dir/test-brick-1b<br># mkdir /exports/test-dir/test-brick-2a<br># mkdir /exports/test-dir/test-brick-2b<br><br># gluster volume create test-replica-dist replica 2 transport tcp 10.10.60.169:/exports/test-dir/test-brick-1a 10.10.60.169:/exports/test-dir/test-brick-2a 10.10.60.169:/exports/test-dir/test-brick-1b 10.10.60.169:/exports/test-dir/test-brick-2b force<br>volume create: test-replica-dist: success: please start the volume to access data<br># gluster volume start test-replica-dist<br>volume start: test-replica-dist: success<br><br># gluster volume info test-replica-dist<br>Volume Name: test-replica-dist<br>Type: Distributed-Replicate<br>Volume ID: c8de4e65-2304-4801-a244-6511f39fc0c9<br>Status: Started<br>Number of Bricks: 2 x 2 = 4<br>Transport-type: tcp<br>Bricks:<br>Brick1: 10.10.60.169:/exports/test-dir/test-brick-1a<br>Brick2: 10.10.60.169:/exports/test-dir/test-brick-2a<br>Brick3: 10.10.60.169:/exports/test-dir/test-brick-1b<br>Brick4: 10.10.60.169:/exports/test-dir/test-brick-2b<br>Options Reconfigured:<br>snap-activate-on-create: enable<br><br># mkdir /mnt/test-replica-dist<br># mount -t glusterfs -o acl,log-level=WARNING 127.0.0.1:/test-replica-dist /mnt/test-replica-dist/<br><br># cp -rf /lib64/ /mnt/test-replica-dist/<br># diff -r /lib64/ /mnt/test-replica-dist/lib64/<br><br># umount /mnt/test-replica-dist<br># gluster volume stop 
test-replica-dist<br>Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y<br>volume stop: test-replica-dist: success<br># gluster volume delete test-replica-dist<br>Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y<br>volume delete: test-replica-dist: success<br><br># gluster_clear_xattrs.sh /exports/test-dir/test-brick-1a<br>removing all .glusterfs directories in progress: /exports/test-dir/test-brick-1a<br>xattr clean-up in progress: /exports/test-dir/test-brick-1a<br>/exports/test-dir/test-brick-1a ready to be used as a glusterfs brick<br># gluster_clear_xattrs.sh /exports/test-dir/test-brick-1b<br>removing all .glusterfs directories in progress: /exports/test-dir/test-brick-1b<br>xattr clean-up in progress: /exports/test-dir/test-brick-1b<br>/exports/test-dir/test-brick-1b ready to be used as a glusterfs brick<br># gluster_clear_xattrs.sh /exports/test-dir/test-brick-2a<br>removing all .glusterfs directories in progress: /exports/test-dir/test-brick-2a<br>xattr clean-up in progress: /exports/test-dir/test-brick-2a<br>/exports/test-dir/test-brick-2a ready to be used as a glusterfs brick<br># gluster_clear_xattrs.sh /exports/test-dir/test-brick-2b<br>removing all .glusterfs directories in progress: /exports/test-dir/test-brick-2b<br>xattr clean-up in progress: /exports/test-dir/test-brick-2b<br>/exports/test-dir/test-brick-2b ready to be used as a glusterfs brick<br><br># gluster volume create test-replica-dist replica 2 transport tcp 10.10.60.169:/exports/test-dir/test-brick-1a 10.10.60.169:/exports/test-dir/test-brick-2a 10.10.60.169:/exports/test-dir/test-brick-1b 10.10.60.169:/exports/test-dir/test-brick-2b force<br>volume create: test-replica-dist: success: please start the volume to access data<br># gluster volume start test-replica-dist<br>volume start: test-replica-dist: success<br># mount -t glusterfs -o acl,log-level=WARNING 127.0.0.1:/test-replica-dist /mnt/test-replica-dist/<br># 
diff -r /lib64/ /mnt/test-replica-dist/lib64/<br>Only in /lib64/device-mapper: libdevmapper-event-lvm2thin.so<br>Only in /lib64/multipath: libcheckcciss_tur.so<br>Only in /lib64/multipath: libcheckemc_clariion.so<br>Only in /lib64/multipath: libcheckhp_sw.so<br>Only in /lib64/multipath: libprioconst.so<br>Only in /lib64/multipath: libpriordac.so<br>Only in /lib64/multipath: libprioweighted.so<br>Only in /lib64/rtkaio: librtkaio-2.12.so<br>Only in /lib64/rtkaio: librt.so.1<br>Only in /lib64/xtables: libip6t_ah.so<br>Only in /lib64/xtables: libip6t_dst.so<br>Only in /lib64/xtables: libip6t_eui64.so<br>Only in /lib64/xtables: libip6t_frag.so<br>Only in /lib64/xtables: libip6t_HL.so<br>Only in /lib64/xtables: libip6t_icmp6.so<br>Only in /lib64/xtables: libip6t_LOG.so<br>Only in /lib64/xtables: libip6t_mh.so<br>Only in /lib64/xtables: libip6t_REJECT.so<br>Only in /lib64/xtables: libip6t_set.so<br>Only in /lib64/xtables: libipt_ah.so<br>Only in /lib64/xtables: libipt_ecn.so<br>Only in /lib64/xtables: libipt_ECN.so<br>Only in /lib64/xtables: libipt_icmp.so<br>Only in /lib64/xtables: libipt_MIRROR.so<br>Only in /lib64/xtables: libipt_realm.so<br>Only in /lib64/xtables: libipt_REDIRECT.so<br>Only in /lib64/xtables: libipt_REJECT.so<br>Only in /lib64/xtables: libipt_SAME.so<br>Only in /lib64/xtables: libipt_SET.so<br>Only in /lib64/xtables: libipt_SNAT.so<br>Only in /lib64/xtables: libipt_ttl.so<br>Only in /lib64/xtables: libipt_ULOG.so<br>Only in /lib64/xtables: libipt_unclean.so<br>Only in /lib64/xtables: libxt_CHECKSUM.so<br>Only in /lib64/xtables: libxt_cluster.so<br>Only in /lib64/xtables: libxt_connbytes.so<br>Only in /lib64/xtables: libxt_connlimit.so<br>Only in /lib64/xtables: libxt_CONNMARK.so<br>Only in /lib64/xtables: libxt_CONNSECMARK.so<br>Only in /lib64/xtables: libxt_conntrack.so<br>Only in /lib64/xtables: libxt_dccp.so<br>Only in /lib64/xtables: libxt_dscp.so<br>Only in /lib64/xtables: libxt_iprange.so<br>Only in 
/lib64/xtables: libxt_length.so<br>Only in /lib64/xtables: libxt_limit.so<br>Only in /lib64/xtables: libxt_mac.so<br>Only in /lib64/xtables: libxt_multiport.so<br>Only in /lib64/xtables: libxt_osf.so<br>Only in /lib64/xtables: libxt_physdev.so<br>Only in /lib64/xtables: libxt_pkttype.so<br>Only in /lib64/xtables: libxt_policy.so<br>Only in /lib64/xtables: libxt_quota.so<br>Only in /lib64/xtables: libxt_RATEEST.so<br>Only in /lib64/xtables: libxt_sctp.so<br>Only in /lib64/xtables: libxt_SECMARK.so<br>Only in /lib64/xtables: libxt_socket.so<br>Only in /lib64/xtables: libxt_statistic.so<br>Only in /lib64/xtables: libxt_string.so<br>Only in /lib64/xtables: libxt_tcpmss.so<br>Only in /lib64/xtables: libxt_TCPOPTSTRIP.so<br>Only in /lib64/xtables: libxt_time.so<br>Only in /lib64/xtables: libxt_TOS.so<br>Only in /lib64/xtables: libxt_TPROXY.so<br>Only in /lib64/xtables: libxt_TRACE.so<br>Only in /lib64/xtables: libxt_udp.so<br><br># gluster volume rebalance test-replica-dist start<br>volume rebalance: test-replica-dist: success: Rebalance on test-replica-dist has been started successfully. 
Use rebalance status command to check status of the rebalance process.<br>ID: ccf76757-c3df-4ae2-af2d-b82f8283d821<br><br># gluster volume rebalance test-replica-dist status<br>                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs<br>                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------<br>                               localhost                0        0Bytes           249             3             0               failed               0.00<br>volume rebalance: test-replica-dist: success<br><br># gluster volume heal test-replica-dist full<br>Launching heal operation to perform full self heal on volume test-replica-dist has been successful<br>Use heal info commands to check status<br># gluster volume heal test-replica-dist info split-brain<br>Brick 10.10.60.169:/exports/test-dir/test-brick-1a<br>Number of entries in split-brain: 0<br><br>Brick 10.10.60.169:/exports/test-dir/test-brick-2a<br>Number of entries in split-brain: 0<br><br>Brick 10.10.60.169:/exports/test-dir/test-brick-1b<br>Number of entries in split-brain: 0<br><br>Brick 10.10.60.169:/exports/test-dir/test-brick-2b<br>Number of entries in split-brain: 0<br><br># diff -r /lib64/ /mnt/test-replica-dist/lib64/<br>Only in /lib64/device-mapper: libdevmapper-event-lvm2thin.so<br>Only in /lib64/multipath: libcheckcciss_tur.so<br>Only in /lib64/multipath: libcheckemc_clariion.so<br>...<br>Only in /lib64/xtables: libxt_TRACE.so<br>Only in /lib64/xtables: libxt_udp.so<br><br># view /var/log/glusterfs/test-replica-dist-rebalance.log<br>[2015-12-14 23:06:25.432546] E [dht-rebalance.c:2949:gf_defrag_fix_layout] 0-test-replica-dist-dht: //.trashcan gfid not present<br>[2015-12-14 23:06:25.433196] I [MSGID: 109081] [dht-common.c:3810:dht_setxattr] 0-test-replica-dist-dht: fixing the layout of /lib64<br>[2015-12-14 
23:06:25.433217] I [MSGID: 109045] [dht-selfheal.c:1509:dht_fix_layout_of_directory] 0-test-replica-dist-dht: subvolume 0 (test-replica-dist-replicate-0): 1014 chunks<br>[2015-12-14 23:06:25.433228] I [MSGID: 109045] [dht-selfheal.c:1509:dht_fix_layout_of_directory] 0-test-replica-dist-dht: subvolume 1 (test-replica-dist-replicate-1): 1014 chunks<br>[2015-12-14 23:06:25.434564] I [dht-rebalance.c:2446:gf_defrag_process_dir] 0-test-replica-dist-dht: migrate data called on /lib64<br>[2015-12-14 23:06:25.562584] I [dht-rebalance.c:2656:gf_defrag_process_dir] 0-test-replica-dist-dht: Migration operation on dir /lib64 took 0.13 secs<br>[2015-12-14 23:06:25.568177] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. Path: /lib64/xtables (0e2a4e5c-7c8b-4f8b-979a-9dbd73de6ecc) [No data available]<br>[2015-12-14 23:06:25.568206] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. Path: /lib64/xtables (0e2a4e5c-7c8b-4f8b-979a-9dbd73de6ecc) [No data available]<br>[2015-12-14 23:06:25.568557] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:06:25.568581] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:06:25.569942] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:06:25.569964] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. 
Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:06:25.570117] I [MSGID: 109063] [dht-layout.c:702:dht_layout_normalize] 0-test-replica-dist-dht: Found anomalies in /lib64/xtables (gfid = 0e2a4e5c-7c8b-4f8b-979a-9dbd73de6ecc). Holes=1 overlaps=0<br>[2015-12-14 23:06:25.570142] W [MSGID: 109005] [dht-selfheal.c:1805:dht_selfheal_directory] 0-test-replica-dist-dht: Directory selfheal failed : 1 subvolumes have unrecoverable errors. path = /lib64/xtables, gfid = 0e2a4e5c-7c8b-4f8b-979a-9dbd73de6ecc<br>[2015-12-14 23:06:25.570179] I [MSGID: 109081] [dht-common.c:3810:dht_setxattr] 0-test-replica-dist-dht: fixing the layout of /lib64/xtables<br>[2015-12-14 23:06:25.570206] I [MSGID: 109045] [dht-selfheal.c:1509:dht_fix_layout_of_directory] 0-test-replica-dist-dht: subvolume 0 (test-replica-dist-replicate-0): 1014 chunks<br>[2015-12-14 23:06:25.570219] I [MSGID: 109045] [dht-selfheal.c:1509:dht_fix_layout_of_directory] 0-test-replica-dist-dht: subvolume 1 (test-replica-dist-replicate-1): 1014 chunks<br>[2015-12-14 23:06:25.570889] E [dht-rebalance.c:2992:gf_defrag_fix_layout] 0-test-replica-dist-dht: Setxattr failed for /lib64/xtables<br>[2015-12-14 23:06:25.571049] E [MSGID: 109016] [dht-rebalance.c:3006:gf_defrag_fix_layout] 0-test-replica-dist-dht: Fix layout failed for /lib64<br>[2015-12-14 23:06:25.571182] I [dht-rebalance.c:2085:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 5<br>[2015-12-14 23:06:25.571255] I [dht-rebalance.c:2085:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 6<br>[2015-12-14 23:06:25.571281] I [dht-rebalance.c:2085:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 7<br>[2015-12-14 23:06:25.571302] I [dht-rebalance.c:2085:gf_defrag_task] 0-DHT: Thread wokeup. defrag-&gt;current_thread_count: 8<br>[2015-12-14 23:06:25.571627] I [MSGID: 109028] [dht-rebalance.c:3485:gf_defrag_status_get] 0-test-replica-dist-dht: Rebalance is failed. 
Time taken is 0.00 secs<br>[2015-12-14 23:06:25.571647] I [MSGID: 109028] [dht-rebalance.c:3489:gf_defrag_status_get] 0-test-replica-dist-dht: Files migrated: 0, size: 0, lookups: 249, failures: 3, skipped: 0<br><br># view /var/log/glusterfs/mnt-test-replica-dist-.log<br>[2015-12-14 23:05:03.797439] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.800523] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. Path: /lib64/xtables-1.4.7/libxt_TPROXY.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.800626] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. Path: /lib64/xtables-1.4.7/libxt_TPROXY.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.802003] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.802100] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.804886] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. Path: /lib64/xtables-1.4.7/libxt_TRACE.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.804989] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. 
Path: /lib64/xtables-1.4.7/libxt_TRACE.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.806342] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.806477] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.809396] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-0: remote operation failed. Path: /lib64/xtables-1.4.7/libxt_u32.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.809430] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-1: remote operation failed. Path: /lib64/xtables-1.4.7/libxt_u32.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.810841] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-0: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.810905] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-1: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.813705] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. Path: /lib64/xtables-1.4.7/libxt_udp.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.813824] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. 
Path: /lib64/xtables-1.4.7/libxt_udp.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.815201] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.815295] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.818854] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-0: remote operation failed. Path: /lib64/ZIPScanner.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.818895] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-1: remote operation failed. Path: /lib64/ZIPScanner.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.820314] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-0: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.820342] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-1: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.821992] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-0: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.822001] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-1: remote operation failed. 
Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.822207] W [fuse-resolve.c:65:fuse_resolve_entry_cbk] 0-fuse: 056bce17-a4c2-4e13-a352-40f783c4804a/ZIPScanner.so: failed to resolve (No data available)<br>[2015-12-14 23:05:03.822574] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-0: remote operation failed. Path: /lib64/ZIPScanner.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.822600] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-1: remote operation failed. Path: /lib64/ZIPScanner.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.824000] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-0: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:03.824076] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-1: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:01.386125] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. Path: /lib64 (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:01.386151] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. Path: /lib64 (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:05:01.411283] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. 
Path: /lib64/CFBScanner.so (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:07:38.979973] W [MSGID: 108008] [afr-read-txn.c:250:afr_read_txn] 0-test-replica-dist-replicate-1: Unreadable subvolume -1 found with event generation 2 for gfid 8a2836d2-dd2c-46dc-9a1e-437c5444a704. (Possible split-brain)<br>[2015-12-14 23:07:38.983422] W [MSGID: 108008] [afr-read-txn.c:250:afr_read_txn] 0-test-replica-dist-replicate-1: Unreadable subvolume -1 found with event generation 2 for gfid 43c8e284-1cd0-48a8-b8e5-c075092eeaa7. (Possible split-brain)<br>[2015-12-14 23:07:39.031276] W [MSGID: 109011] [dht-layout.c:191:dht_layout_search] 0-test-replica-dist-dht: no subvolume for hash (value) = 827357797<br>[2015-12-14 23:07:39.031587] W [fuse-bridge.c:462:fuse_entry_cbk] 0-glusterfs-fuse: 3874: LOOKUP() /lib64/device-mapper/libdevmapper-event-lvm2thin.so =&gt; -1 (Stale file handle)<br>[2015-12-14 23:07:39.032043] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. Path: /lib64/device-mapper (43c8e284-1cd0-48a8-b8e5-c075092eeaa7) [No data available]<br>[2015-12-14 23:07:39.032090] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. Path: /lib64/device-mapper (43c8e284-1cd0-48a8-b8e5-c075092eeaa7) [No data available]<br>[2015-12-14 23:07:39.033510] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:07:39.033523] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:07:39.034759] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-2: remote operation failed. 
Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:07:39.034790] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-test-replica-dist-client-3: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]<br>[2015-12-14 23:07:39.035335] W [fuse-bridge.c:462:fuse_entry_cbk] 0-glusterfs-fuse: 3876: LOOKUP() /lib64/device-mapper/libdevmapper-event-lvm2thin.so =&gt; -1 (Stale file handle)<br>[2015-12-14 23:07:39.037553] W [fuse-bridge.c:462:fuse_entry_cbk] 0-glusterfs-fuse: 3881: LOOKUP() /lib64/device-mapper/libdevmapper-event-lvm2thin.so =&gt; -1 (Stale file handle)<br>[2015-12-14 23:07:39.037976] W [fuse-bridge.c:462:fuse_entry_cbk] 0-glusterfs-fuse: 3883: LOOKUP() /lib64/device-mapper/libdevmapper-event-lvm2thin.so =&gt; -1 (Stale file handle)<br>The message &quot;W [MSGID: 109011] [dht-layout.c:191:dht_layout_search] 0-test-replica-dist-dht: no subvolume for hash (value) = 827357797&quot; repeated 3 times between [2015-12-14 23:07:39.031276] and [2015-12-14 23:07:39.037686]<br>[2015-12-14 23:07:39.190729] W [MSGID: 109011] [dht-layout.c:191:dht_layout_search] 0-test-replica-dist-dht: no subvolume for hash (value) = 3553706931<br>[2015-12-14 23:07:39.190781] W [MSGID: 109011] [dht-layout.c:191:dht_layout_search] 0-test-replica-dist-dht: no subvolume for hash (value) = 3781782680<br>[2015-12-14 23:07:39.190990] W [MSGID: 108008] [afr-read-txn.c:250:afr_read_txn] 0-test-replica-dist-replicate-1: Unreadable subvolume -1 found with event generation 2 for gfid eeab01c5-af5f-49f8-bd06-01471f405c84. (Possible split-brain)<br>[2015-12-14 23:07:39.214980] W [MSGID: 108008] [afr-read-txn.c:250:afr_read_txn] 0-test-replica-dist-replicate-1: Unreadable subvolume -1 found with event generation 2 for gfid fe99280a-10f4-4e2f-8483-99d63461fa9e. 
(Possible split-brain)<br>[2015-12-14 23:07:39.227809] W [MSGID: 109011] [dht-layout.c:191:dht_layout_search] 0-test-replica-dist-dht: no subvolume for hash (value) = 3553706931<br>[2015-12-14 23:07:39.227837] W [MSGID: 109011] [dht-layout.c:191:dht_layout_search] 0-test-replica-dist-dht: no subvolume for hash (value) = 3781782680<br>[2015-12-14 23:07:39.228015] W [MSGID: 108008] [afr-read-txn.c:250:afr_read_txn] 0-test-replica-dist-replicate-1: Unreadable subvolume -1 found with event generation 2 for gfid 862461d1-38ef-4f3a-8216-c9d9dedde1af. (Possible split-brain)<br>[2015-12-14 23:07:39.264828] W [MSGID: 109011] [dht-layout.c:191:dht_layout_search] 0-test-replica-dist-dht: no subvolume for hash (value) = 3553706931<br>[2015-12-14 23:07:39.264862] W [MSGID: 109011] [dht-layout.c:191:dht_layout_search] 0-test-replica-dist-dht: no subvolume for hash (value) = 3781782680<br>[2015-12-14 23:07:39.266617] W [MSGID: 108008] [afr-read-txn.c:250:afr_read_txn] 0-test-replica-dist-replicate-1: Unreadable subvolume -1 found with event generation 2 for gfid 0e2a4e5c-7c8b-4f8b-979a-9dbd73de6ecc. 
(Possible split-brain)<br><br>##########################################<br># Successful &quot;Distributed&quot; recreate case.<br>##########################################<br><br># mkdir /exports/test-dir/test-brick-1<br># mkdir /exports/test-dir/test-brick-2<br><br># gluster volume create test-dist transport tcp 10.10.60.169:/exports/test-dir/test-brick-1 10.10.60.169:/exports/test-dir/test-brick-2<br>volume create: test-dist: success: please start the volume to access data<br># gluster volume start test-dist<br>volume start: test-dist: success<br><br># gluster volume info test-dist<br>Volume Name: test-dist<br>Type: Distribute<br>Volume ID: 385a8546-1776-45be-8ae4-cd94ed37f2a5<br>Status: Started<br>Number of Bricks: 2<br>Transport-type: tcp<br>Bricks:<br>Brick1: 10.10.60.169:/exports/test-dir/test-brick-1<br>Brick2: 10.10.60.169:/exports/test-dir/test-brick-2<br>Options Reconfigured:<br>snap-activate-on-create: enable<br><br># mkdir /mnt/test-dist<br># mount -t glusterfs -o acl,log-level=WARNING 127.0.0.1:/test-dist /mnt/test-dist/<br><br># cp -rf /lib64/ /mnt/test-dist/<br># diff -r /lib64/ /mnt/test-dist/lib64/<br><br># umount /mnt/test-dist/<br># gluster volume stop test-dist<br>Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y<br>volume stop: test-dist: success<br># gluster volume delete test-dist<br>Deleting volume will erase all information about the volume. Do you want to continue? 
(y/n) y<br>volume delete: test-dist: success<br><br># gluster_clear_xattrs.sh /exports/test-dir/test-brick-1<br>removing all .glusterfs directories in progress: /exports/test-dir/test-brick-1<br>xattr clean-up in progress: /exports/test-dir/test-brick-1<br>/exports/test-dir/test-brick-1 ready to be used as a glusterfs brick<br># gluster_clear_xattrs.sh /exports/test-dir/test-brick-2<br>removing all .glusterfs directories in progress: /exports/test-dir/test-brick-2<br>xattr clean-up in progress: /exports/test-dir/test-brick-2<br>/exports/test-dir/test-brick-2 ready to be used as a glusterfs brick<br><br># gluster volume create test-dist transport tcp 10.10.60.169:/exports/test-dir/test-brick-1 10.10.60.169:/exports/test-dir/test-brick-2<br>volume create: test-dist: success: please start the volume to access data<br># gluster volume start test-dist<br>volume start: test-dist: success<br># mount -t glusterfs -o acl,log-level=WARNING 127.0.0.1:/test-dist /mnt/test-dist/<br># diff -r /lib64/ /mnt/test-dist/lib64/<br><br># gluster volume rebalance test-dist start<br>volume rebalance: test-dist: success: Rebalance on test-dist has been started successfully. 
Use rebalance status command to check status of the rebalance process.<br>ID: 63163e88-0b81-40cf-9050-4af12bf31acd<br># gluster volume rebalance test-dist status<br>                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs<br>                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------<br>                               localhost                0        0Bytes           537             0             0            completed               1.00<br>volume rebalance: test-dist: success<br><br>##########################################<br># Successful &quot;Replicate&quot; recreate case.<br>##########################################<br><br># mkdir /exports/test-dir/test-brick-3a<br># mkdir /exports/test-dir/test-brick-3b<br># gluster volume create test-replica replica 2 transport tcp 10.10.60.169:/exports/test-dir/test-brick-3a 10.10.60.169:/exports/test-dir/test-brick-3b force<br>volume create: test-replica: success: please start the volume to access data<br># gluster volume start test-replica<br>volume start: test-replica: success<br><br># gluster volume info test-replica<br>Volume Name: test-replica<br>Type: Replicate<br>Volume ID: 1e66af41-a29f-45ba-b25d-b4b16d2a66d9<br>Status: Started<br>Number of Bricks: 1 x 2 = 2<br>Transport-type: tcp<br>Bricks:<br>Brick1: 10.10.60.169:/exports/test-dir/test-brick-3a<br>Brick2: 10.10.60.169:/exports/test-dir/test-brick-3b<br>Options Reconfigured:<br>snap-activate-on-create: enable<br><br># mkdir /mnt/test-replica<br># mount -t glusterfs -o acl,log-level=WARNING 127.0.0.1:/test-replica /mnt/test-replica<br># cp -rf /lib64/ /mnt/test-replica/<br># diff -r /lib64/ /mnt/test-replica/lib64/<br><br># umount /mnt/test-replica<br># gluster volume stop test-replica<br>Stopping volume will make its data 
inaccessible. Do you want to continue? (y/n) y<br>volume stop: test-replica: success<br># gluster volume delete test-replica<br>Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y<br>volume delete: test-replica: success<br><br># gluster_clear_xattrs.sh /exports/test-dir/test-brick-3a<br>removing all .glusterfs directories in progress: /exports/test-dir/test-brick-3a<br>xattr clean-up in progress: /exports/test-dir/test-brick-3a<br>/exports/test-dir/test-brick-3a ready to be used as a glusterfs brick<br># gluster_clear_xattrs.sh /exports/test-dir/test-brick-3b<br>removing all .glusterfs directories in progress: /exports/test-dir/test-brick-3b<br>xattr clean-up in progress: /exports/test-dir/test-brick-3b<br>/exports/test-dir/test-brick-3b ready to be used as a glusterfs brick<br><br># gluster volume create test-replica replica 2 transport tcp 10.10.60.169:/exports/test-dir/test-brick-3a 10.10.60.169:/exports/test-dir/test-brick-3b force<br>volume create: test-replica: success: please start the volume to access data<br># gluster volume start test-replica<br>volume start: test-replica: success<br><br># mount -t glusterfs -o acl,log-level=WARNING 127.0.0.1:/test-replica /mnt/test-replica<br># diff -r /lib64/ /mnt/test-replica/lib64/<br><br></div>
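The steps above rely on a site-local helper, gluster_clear_xattrs.sh, whose contents are not included in the post. For anyone trying to reproduce this without that script, a minimal sketch of what such a brick clean-up step typically does, removing the .glusterfs metadata directory and stripping the GlusterFS trusted.* xattrs, might look like the following; the function name and the scratch-directory demo are illustrative, not the actual script:

```shell
#!/bin/sh
# Hypothetical sketch of a brick clean-up helper (assumption: the
# real gluster_clear_xattrs.sh is not shown in the post).
clear_brick_xattrs() {
    brick="$1"
    # Remove the GlusterFS metadata directory from the brick root.
    rm -rf "$brick/.glusterfs"
    # Strip the volume-id xattr from the brick root and the gfid
    # xattr from everything below it; errors for absent xattrs (or
    # a missing setfattr on this host) are ignored.
    setfattr -x trusted.glusterfs.volume-id "$brick" 2>/dev/null
    find "$brick" -exec setfattr -x trusted.gfid {} \; 2>/dev/null
    echo "$brick ready to be used as a glusterfs brick"
}

# Demo on a scratch directory standing in for a real brick:
demo=$(mktemp -d)
mkdir -p "$demo/.glusterfs/00/11"
touch "$demo/somefile"
clear_brick_xattrs "$demo"
```

On a real brick this would be run as root against each brick directory after `gluster volume delete`, before recreating the volume.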