<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    Hi Milind,<br>
    <br>
    thanks again for your answer...<br>
    damn...i found an old mail from Venky Shankar and obviously i had a
    wrong view of ignore_deletes...<br>
    but:<br>
    <tt>[ 20:42:53 ] - root@gluster-ger-ber-07  ~ $gluster volume
      geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 config
      ignore_deletes false</tt><tt><br>
    </tt><tt>Reserved option</tt><tt><br>
    </tt><tt>geo-replication command failed</tt><tt><br>
    </tt><tt>[ 20:43:06 ] - root@gluster-ger-ber-07  ~ $gluster volume
      geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 config
      ignore_deletes no</tt><tt><br>
    </tt><tt>Reserved option</tt><tt><br>
    </tt><tt>geo-replication command failed</tt><tt><br>
    </tt><tt>[ 20:43:11 ] - root@gluster-ger-ber-07  ~ </tt><tt><br>
    </tt>i stopped the geo-replication and tried it again...same result.<br>
    possibly i should start a new replication from scratch and set
    ignore_deletes to false before starting it.<br>
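    e.g. something like the following (just a sketch, untested, and
    assuming the option is accepted before the first start):<br>
    <tt>gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 stop</tt><br>
    <tt>gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 delete</tt><br>
    <tt>gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 create push-pem</tt><br>
    <tt>gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 config ignore_deletes false</tt><br>
    <tt>gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 start</tt><br>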
    <br>
    meanwhile the reason for the new 'operation not permitted' messages
    was found...<br>
    a directory on the master and a file on the slave at the same
    directory level with the same name...<br>
    <br>
    best regards<br>
    dietmar<br>
    <br>
    <br>
    <br>
    <br>
    <br>
    <div class="moz-cite-prefix">Am 20.01.2016 um 19:28 schrieb Milind
      Changire:<br>
    </div>
    <blockquote
cite="mid:CANmksPRkzXdFk3QCBHBoWf7fh8j=qUVxL8TeW0FwJujk3u8JHg@mail.gmail.com"
      type="cite">
      <div dir="ltr">Dietmar,
        <div>I just looked at your very first post describing the
          problem and I found<br>
        </div>
        <div><br>
        </div>
        <div><span style="font-size:12.8px">ignore_deletes: true</span><br>
        </div>
        <div><span style="font-size:12.8px"><br>
          </span></div>
        <div><span style="font-size:12.8px">in the geo-replication
            config command output.</span></div>
        <div><span style="font-size:12.8px"><br>
          </span></div>
        <div><span style="font-size:12.8px">If you'd like the slave
            volume to replicate file deletion as well, then the
            "ignore_deletes" should be set to "false"</span></div>
        <div><span style="font-size:12.8px">That should help resolve the
            CREATE + DELETE + CREATE issue.</span></div>
        <div><span style="font-size:12.8px"><br>
          </span></div>
        <div><span style="font-size:12.8px">If this doesn't help, then
            strace output for gsyncd could be the savior.</span></div>
        <div><span style="font-size:12.8px"><br>
          </span></div>
        <div><span style="font-size:12.8px">-----</span></div>
        <div><span style="font-size:12.8px">To add further ...</span></div>
        <div><span style="font-size:12.8px">Lately we've stumbled across
            another issue with the CREATE + RENAME + CREATE sequence on
            geo-replication restart. Two fixes have been posted and are
            available for review upstream.</span></div>
        <div><span style="font-size:12.8px"><br>
          </span></div>
        <div>geo-rep: avoid creating multiple entries with same gfid -- <a class="moz-txt-link-freetext" href="http://review.gluster.org/13186">http://review.gluster.org/13186</a></div>
        <div>geo-rep: hard-link rename issues on changelog replay -- <a class="moz-txt-link-freetext" href="http://review.gluster.org/13189">http://review.gluster.org/13189</a><br>
        </div>
        <div><br>
        </div>
        <div>I'll post info about the fix propagation plan for the 3.6.x
          series later.</div>
        <div><br>
        </div>
        <div>--</div>
        <div>Milind</div>
        <div><br>
        </div>
        <div><span style="font-size:12.8px"><br>
          </span></div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Wed, Jan 20, 2016 at 11:23 PM,
          Dietmar Putz <span dir="ltr">&lt;<a moz-do-not-send="true"
              href="mailto:putz@3qmedien.net" target="_blank">putz@3qmedien.net</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Milind,<br>
            <br>
            thank you for your reply...<br>
            meanwhile i realized that the setfattr command doesn't work, so
            i decided to delete the affected files and directories, but
            without stopping the geo-replication...i did it before i read
            your mail.<br>
            the affected folders are already replicated with the same
            gfid as on the master...so this is solved for the moment.<br>
            afterwards i did not see the 'errcode: 23' messages on
            the masters or the 'Operation not permitted' messages on
            the slaves for 2 1/2 days, but the geo-replication kept
            restarting about every 2 - 4 hours on each active master /
            slave with the "OSError: [Errno 16] Device or
            resource busy" message shown far below on master and slave.<br>
            every time the geo-replication restarted i saw the following
            line embedded in the restart event (as shown far below):<span class=""><br>
              <br>
              I [dht-layout.c:663:dht_layout_normalize]
              0-aut-wien-vol-01-dht: Found anomalies in
              /.gfid/cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2/.dstXXb70G3x
              (gfid = 00000000-0000-0000-0000-000000000000). Holes=1
              overlaps=0<br>
              <br>
            </span>
            i have found 24 of these hidden .dst* folders. All of them
            are located in the same two subfolders on the master
            and slave, but the .dstXXb70G3x shown above is the only one
            that exists only on the slave volume. I checked that folder
            and deleted it on the slave since it was empty. i believe
            that such folders somehow belong to the geo-replication
            process, but i have no details.<br>
            however, about one day after the deletion this folder was
            recreated on the slave, but since the deletion there have
            been no more 'Found anomalies' messages.<br>
            currently the geo-replication still restarts frequently with
            the message shown far below, and unfortunately some
            'Operation not permitted' messages appear again, but for
            different files than before.<br>
            I already checked all folders on master/slave for different
            gfid's, but there are no more different gfid's. i guess there
            is no way around comparing the gfid's of all files on
            master and slave...since it is a dist-repl. volume there are
            several million lines.<br>
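            (a rough sketch of how the per-brick gfid's could be dumped
            for such a diff - untested, paths assumed, run on the brick
            backend with .glusterfs excluded:)<br>
            find /gluster-export -path '*/.glusterfs' -prune -o -print0 | \<br>
              xargs -0 getfattr --absolute-names -n trusted.gfid -e hex &gt; gfid-`hostname`.out 2&gt;/dev/null<br>
            <br>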
            if geo-replication then still does not work i will start the
            suggested strace of gsyncd. in regard to the strace i
            have two questions...<br>
            <br>
            do i need to start the strace on all active masters / slaves,
            or is it sufficient to trace one active master and the
            corresponding active slave ?<br>
            should i try to capture a geo-rep restart with the strace, or
            is it sufficient to let it run for one minute at a random
            time ?<br>
            <br>
            <br>
            maybe this is of interest for solving the problem...on the
            active masters there are lines like :<br>
            ...<br>
            [2016-01-19 18:34:58.441606] W
            [master(/gluster-export):1015:process] _GMaster: incomplete
            sync, retrying changelogs: XSYNC-CHANGELOG.1453225971<br>
            [2016-01-19 18:36:27.313515] W
            [master(/gluster-export):1015:process] _GMaster: incomplete
            sync, retrying changelogs: XSYNC-CHANGELOG.1453225971<br>
            [2016-01-19 18:37:56.337660] W
            [master(/gluster-export):996:process] _GMaster: changelogs
            XSYNC-CHANGELOG.1453225971 could not be processed - moving
            on...<br>
            [2016-01-19 18:37:56.339124] W
            [master(/gluster-export):1000:process] _GMaster: SKIPPED
            GFID = 5a47cc07-f32f-4685-ae8e-4969995f3f1c,&lt;huge list
            with gfid's&gt;<br>
            <br>
            they end up in a huge list of comma-separated gfid's. is
            there any hint on how to get something useful out of these
            xsync-changelogs, e.g. a way to find out what was incomplete ?<br>
            <br>
            one more thing that concerns me...i'm trying to understand
            the distributed geo-replication.<br>
            the master volume is a living object, accessed by some
            clients who upload, delete and recreate files and
            folders. not very frequently, but i observed the mentioned two
            folders with different gfid's on master / slave, and now they
            have been deleted by some client on the master volume. The
            geo-replication is still in hybrid crawl and afaik cannot
            delete or rename files and folders on the slave volume until
            changelog mode is reached. When a client now recreates the
            same folders on the master they get new gfid's
            assigned which differ from the still existing gfid's
            on the slave, i believe...so geo-replication should get into
            conflict again because of existing folders on the slave with
            the same path but a different gfid than on the master...just
            like for any other files which are deleted and later
            recreated while geo-rep is in hybrid crawl. is that right ?<br>
            if so it will be difficult to reach changelog mode on
            large gluster volumes, because in our case the initial hybrid
            crawl took some days for about 45 TB...or does r/w access
            need to be stopped for that time ?<br>
            <br>
            thanks in advance and<br>
            best regards<span class="HOEnZb"><font color="#888888"><br>
                dietmar</font></span><span class=""><br>
              <br>
              <br>
              <br>
              <br>
              On 19.01.2016 at 10:53, Milind Changire wrote:<br>
            </span>
            <blockquote class="gmail_quote" style="margin:0 0 0
              .8ex;border-left:1px #ccc solid;padding-left:1ex"><span
                class="">
                Hi Dietmar,<br>
                After discussion with Aravinda we realized that
                unfortunately the suggestion to:<br>
              </span><span class="">
                       setfattr -n glusterfs.geo-rep.trigger-sync -v "1"
                &lt;DIR&gt;<br>
                       setfattr -n glusterfs.geo-rep.trigger-sync -v "1"
                &lt;file-path&gt;<br>
                <br>
              </span><span class="">
                won't work with 3.6.7, since provision for that
                workaround was added after 3.6.7.<br>
                <br>
                There's an alternative way to achieve the
                geo-replication:<br>
                1. stop geo-replication<br>
                2. delete files and directories with conflicting gfid on
                SLAVE<br>
                3. use the "touch" command to touch files and
                directories with conflicting gfid<br>
                    on MASTER<br>
                4. start geo-replication<br>
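                (a rough command sketch of steps 1-4, assuming the
                master/slave session names used in this thread:)<br>
                # gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 stop<br>
                # rm -rf &lt;conflicting files/dirs&gt;     (on a SLAVE volume mount)<br>
                # touch &lt;affected dirs and files&gt;      (on a MASTER volume mount)<br>
                # gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 start<br>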
                <br>
                This *should* get things correctly replicated to SLAVE.<br>
                Geo-replication should start with hybrid-crawl and
                trigger the replication to SLAVE.<br>
                <br>
                If not, then there's more to look at.<br>
                You could then send us output of the strace command for
                the gsyncd process, while<br>
                geo-replication is running:<br>
                # strace -ff -p &lt;gsyncd-pid&gt; -o gsyncd-strace<br>
                <br>
                You could terminate strace after about one minute and
                send us all the gsyncd-strace.&lt;pid&gt;<br>
                files, which will help us debug the issue if it's not
                resolved by the alternative<br>
                mechanism mentioned above.<br>
                <br>
                Also, crawl status Hybrid Crawl is not an entirely bad
                thing. It could just mean<br>
                that there are a lot of entries being processed.
                However, if things don't<br>
                return back to the normal state after trying out the
                alternative suggestion, we<br>
                could take a look at the strace output and get some
                clues.<br>
                <br>
                --<br>
                Milind<br>
                <br>
              </span>
              <div>
                <div class="h5">
                  ----- Original Message -----<br>
                  From: "Dietmar Putz" &lt;<a moz-do-not-send="true"
                    href="mailto:putz@3qmedien.net" target="_blank">putz@3qmedien.net</a>&gt;<br>
                  To: "Aravinda" &lt;<a moz-do-not-send="true"
                    href="mailto:avishwan@redhat.com" target="_blank">avishwan@redhat.com</a>&gt;,
                  <a moz-do-not-send="true"
                    href="mailto:gluster-users@gluster.org"
                    target="_blank">gluster-users@gluster.org</a>,
                  "Milind Changire" &lt;<a moz-do-not-send="true"
                    href="mailto:mchangir@redhat.com" target="_blank">mchangir@redhat.com</a>&gt;<br>
                  Sent: Thursday, January 14, 2016 11:07:55 PM<br>
                  Subject: Re: [Gluster-users] geo-replication 3.6.7 -
                  no transition from hybrid to changelog crawl<br>
                  <br>
                  Hello all,<br>
                  <br>
                  after some days of inactivity i started another
                  attempt to solve this<br>
                  geo-replication issue...step by step.<br>
                   it looks like some of the directories on the
                   slave volume do not<br>
                   have the same gfid as the corresponding directories on
                   the master volume.<br>
                  <br>
                  for example :<br>
                  <br>
                  on a master-node i can see a lot of 'errcode: 23'
                  lines like :<br>
                  [2016-01-14 09:58:36.96585] W
                  [master(/gluster-export):301:regjob]<br>
                  _GMaster: Rsync:
                  .gfid/a8d0387d-c5ad-4eeb-9fc6-637fb8299a50 [errcode:
                  23]<br>
                  <br>
                  on the corresponding slave the corresponding message :<br>
                  [2016-01-14 09:57:06.070452] W
                  [fuse-bridge.c:1967:fuse_create_cbk]<br>
                  0-glusterfs-fuse: 1185648:
                  /.gfid/a8d0387d-c5ad-4eeb-9fc6-637fb8299a50<br>
                  =&gt; -1 (Operation not permitted)<br>
                  <br>
                   This is the file on the master; it has still not been
                   replicated to the<br>
                   slave.<br>
                  <br>
                  120533444364 97332 -rw-r--r-- 2 2001 2001 99662854
                  Jan  8 13:40<br>
/gluster-export/3912/uploads/BSZ-2015/Z_002895D0-C832-4698-84E6-89F34CDEC2AE_20144555_ST_1.mp4<br>
                  120533444364 97332 -rw-r--r-- 2 2001 2001 99662854
                  Jan  8 13:40<br>
/gluster-export/.glusterfs/a8/d0/a8d0387d-c5ad-4eeb-9fc6-637fb8299a50<br>
                  <br>
                   The directory on the slave already contains some files;
                   none of them are<br>
                   available on the master anymore, obviously deleted
                   in the meantime on<br>
                   the master by a client.<br>
                  i have deleted and recreated this file on the master
                  and observed the<br>
                  logs for recurrence of the newly created gfid of this
                  file...same as before.<br>
                  <br>
                  in <a moz-do-not-send="true"
href="http://comments.gmane.org/gmane.comp.file-systems.gluster.user/20703"
                    rel="noreferrer" target="_blank">http://comments.gmane.org/gmane.comp.file-systems.gluster.user/20703</a><br>
                  a user reports a geo-replication problem which is
                  possibly caused by<br>
                  different gfid's of underlying directories.<br>
                   and yes, for the file in the example above, the gfid of<br>
                   the underlying directory differs from the gfid on the
                   master, while<br>
                   most other directories have the same gfid.<br>
                  <br>
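                   (the gfid listings below were gathered with something
                   along these lines, run from the brick backend - a
                   reconstruction, not the exact command used:)<br>
                   getfattr -n trusted.gfid -e hex gluster-export/3912/uploads/*<br>
                   <br>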
                  master :<br>
                  ...<br>
                  # file: gluster-export/3912/uploads/BSP-2012<br>
                  trusted.gfid=0x8f1d480351bb455b9adde190f2c2b350<br>
                  --------------<br>
                  # file: gluster-export/3912/uploads/BSZ-2003<br>
                  trusted.gfid=0xe80adc088e604234b778997d8e8c2018<br>
                  --------------<br>
                  # file: gluster-export/3912/uploads/BSZ-2004<br>
                  trusted.gfid=0xfe417dd16bbe4ae4a6a1936cfee7aced<br>
                  --------------<br>
                  # file: gluster-export/3912/uploads/BSZ-2010<br>
                  trusted.gfid=0x8044e436407d4ed3a67c81df8a7ad47f       
                  ###<br>
                  --------------<br>
                  # file: gluster-export/3912/uploads/BSZ-2015<br>
                  trusted.gfid=0x0c30f50480204e02b65d4716a048b029       
                  ###<br>
                  <br>
                  slave :<br>
                  ...<br>
                  # file: gluster-export/3912/uploads/BSP-2012<br>
                  trusted.gfid=0x8f1d480351bb455b9adde190f2c2b350<br>
                  --------------<br>
                  # file: gluster-export/3912/uploads/BSZ-2003<br>
                  trusted.gfid=0xe80adc088e604234b778997d8e8c2018<br>
                  --------------<br>
                  # file: gluster-export/3912/uploads/BSZ-2004<br>
                  trusted.gfid=0xfe417dd16bbe4ae4a6a1936cfee7aced<br>
                  --------------<br>
                  # file: gluster-export/3912/uploads/BSZ-2010<br>
                  trusted.gfid=0xd83e8fb568c74e33a2091c547512a6ce       
                  ###<br>
                  --------------<br>
                  # file: gluster-export/3912/uploads/BSZ-2015<br>
                  trusted.gfid=0xa406e1bec7f3454d8f2ce9c5f9c70eb3       
                  ###<br>
                  <br>
                  <br>
                  now the question...how to fix this..?<br>
                  in the thread above Aravinda wrote :<br>
                  <br>
                  ...<br>
                  To fix the issue,<br>
                  -----------------<br>
                  Find the parent directory of "main.mdb",<br>
                  Get the GFID of that directory, using getfattr<br>
                  Check the GFID of the same directory in Slave(To
                  confirm GFIDs are different)<br>
                  To fix the issue, Delete that directory in Slave.<br>
                  Set virtual xattr for that directory and all the files
                  inside that directory.<br>
                         setfattr -n glusterfs.geo-rep.trigger-sync -v
                  "1" &lt;DIR&gt;<br>
                         setfattr -n glusterfs.geo-rep.trigger-sync -v
                  "1" &lt;file-path&gt;<br>
                  <br>
                  Geo-rep will recreate the directory with Proper GFID
                  and starts sync.<br>
                  <br>
                  deletion of the affected slave directory might be
                  helpful...<br>
                   but do i have to execute the setfattr commands shown
                   above on the master, or<br>
                   do they just speed up synchronization ?<br>
                   usually sync should start automatically, or could there
                   be a problem<br>
                   because the crawl status is still 'hybrid crawl'...?<br>
                  <br>
                  thanks in advance...<br>
                  best regards<br>
                  dietmar<br>
                  <br>
                  <br>
                  <br>
                  <br>
                  On 04.01.2016 12:08, Dietmar Putz wrote:<br>
                  <blockquote class="gmail_quote" style="margin:0 0 0
                    .8ex;border-left:1px #ccc solid;padding-left:1ex">
                    Hello Aravinda,<br>
                    <br>
                    thank you for your reply.<br>
                     i just ran 'find /gluster-export -type f -exec ls
                     -lisa {} \; &gt;<br>
                     ls-lisa-gluster-export-`hostname`.out' on each brick
                     and checked the<br>
                     output for files with a link count of less than 2.<br>
                     i found nothing...all files on each brick have
                     exactly 2 links.<br>
                    <br>
                     the entire output for all bricks contains more than 7
                     million lines,<br>
                     including .glusterfs but excluding non-relevant
                     directories and files...<br>
                    tron@dp-server:~/geo_rep_3$ cat
                    ls-lisa-gluster-wien-0* | egrep -v<br>
                    'indices|landfill|changelogs|health_check' | wc -l<br>
                    7007316<br>
                    <br>
                     the link count is in field $4 :<br>
                    tron@dp-server:~/geo_rep_3$ cat
                    ls-lisa-gluster-wien-0* | egrep -v<br>
                    'indices|landfill|changelogs|health_check' | awk
                    '{if($4=="2")print}'<br>
                    | tail -1<br>
                    62648153697 4 -rw-rw-rw- 2 root root 1713 Jan  4
                    01:44<br>
/gluster-export/3500/files/16/01/387233/3500-6dqMmBcVby97PQtR.ism<br>
                    <br>
                    tron@dp-server:~/geo_rep_3$ cat
                    ls-lisa-gluster-wien-0* | egrep -v<br>
                    'indices|landfill|changelogs|health_check' | awk
                    '{if($4=="1")print}'<br>
                    tron@dp-server:~/geo_rep_3$<br>
                    tron@dp-server:~/geo_rep_3$ cat
                    ls-lisa-gluster-wien-0* | egrep -v<br>
                    'indices|landfill|changelogs|health_check' | awk
                    '{if($4!="2")print}'<br>
                    tron@dp-server:~/geo_rep_3$<br>
                    tron@dp-server:~/geo_rep_3$ cat
                    ls-lisa-gluster-wien-0* | egrep -v<br>
                    'indices|landfill|changelogs|health_check' | awk
                    '{print $4}' | sort |<br>
                    uniq -c<br>
                    7007316 2<br>
                    tron@dp-server:~/geo_rep_3$<br>
                    <br>
                     If i understood you right, this can not be the reason
                     for the problem.<br>
                     is there anything else i can check on the
                     master or slave to<br>
                     analyse the problem...?<br>
                    <br>
                    Any help would be very appreciated<br>
                    best regards<br>
                    dietmar<br>
                    <br>
                    <br>
                    <br>
                     On 04.01.2016 at 07:14, Aravinda wrote:<br>
                    <blockquote class="gmail_quote" style="margin:0 0 0
                      .8ex;border-left:1px #ccc solid;padding-left:1ex">
                      Hi,<br>
                      <br>
                       Looks like an issue with Geo-rep due to a race between
                       Create and Rename.<br>
                       Geo-replication uses gfid-access (mounts the volume
                       with aux-gfid-mount)<br>
                       to create and rename files. If Create and Rename are
                       replayed more than<br>
                       once, then Geo-rep creates two files with the same
                       GFID (not hardlinks).<br>
                       This leaves one file without a backend GFID link.<br>
                      <br>
                       Milind is working on a patch to disallow the
                       creation of a second<br>
                       file with the same GFID.<br>
                      @Milind, Please provide more update about your
                      patch.<br>
                      <br>
                       As a workaround, identify all the files in the Slave
                       volume which do not<br>
                       have backend links and delete those files (only on
                       the Slaves; keep a backup<br>
                       if required).<br>
                      <br>
                       In the brick backend, crawl and look for files with
                       a link count of less than<br>
                       2 (excluding the .glusterfs and .trashcan directories).<br>
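                       (for example, something along these lines on each
                       brick - an illustrative sketch, not a command from
                       this thread:)<br>
                       find /gluster-export -path '*/.glusterfs' -prune -o -path '*/.trashcan' -prune -o -type f -links -2 -print<br>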
                      <br>
                      regards<br>
                      Aravinda<br>
                      <br>
                      On 01/02/2016 09:56 PM, Dietmar Putz wrote:<br>
                      <blockquote class="gmail_quote" style="margin:0 0
                        0 .8ex;border-left:1px #ccc
                        solid;padding-left:1ex">
                        Hello all,<br>
                        <br>
                         once more i need some help with a
                         geo-replication problem.<br>
                         recently i started a new geo-replication. the
                         master volume contains<br>
                         about 45 TB of data, and the slave volume was
                         newly created before<br>
                         the geo-replication setup was done.<br>
                         master and slave are each a 6-node distributed
                         replicated volume running<br>
                         glusterfs-server 3.6.7-ubuntu1~trusty1.<br>
                         geo-rep started without problems. for a few
                         days now the slave<br>
                         volume has contained about 200 GB more data than
                         the master volume, and i<br>
                         expected the crawl status to change from
                         'hybrid crawl' to<br>
                         'changelog crawl', but it remains in 'hybrid
                         crawl'.<br>
                        the 'status detail' output far below shows more
                        than 10 million<br>
                        synced files while the entire master volume
                        contains just about 2<br>
                        million files. some tests show that files are
                        not deleted on the<br>
                        slave volume.<br>
                         as far as i know the hybrid crawl has the
                         limitation of not<br>
                         replicating deletes and renames to the slave,
                         thus the geo-rep needs<br>
                         to reach 'changelog crawl' status after the
                         initial sync...<br>
                         usually this should happen more or less
                         automatically, is that right ?<br>
                        <br>
                         the geo-rep frequently fails with the "OSError: [Errno 16]<br>
                         Device or resource busy" shown below; this error
                         appears about every 3-4 hours<br>
                         on each active master node.<br>
                         i guess the frequent appearance of this error
                         prevents geo-rep from<br>
                         changing to 'changelog crawl'. has somebody
                         experienced such a<br>
                         problem, and could this be the cause ?<br>
                        <br>
                        i found some similar reports on <a
                          moz-do-not-send="true"
                          href="http://gluster.org" rel="noreferrer"
                          target="_blank">gluster.org</a> for gfs 3.5,
                        3.6 and 3.7<br>
                        but none of them point me to a solution...<br>
                        does anybody know a solution or is there a
                        workaround to achieve the<br>
                        changelog crawl status...?<br>
                        <br>
                        Any help would be very appreciated<br>
                        best regards<br>
                        dietmar<br>
                        <br>
                        <br>
                        <br>
                        <br>
                        Master gluster-ger-ber-07:<br>
                        -----------------------------<br>
                        <br>
                        [2016-01-02 11:39:48.122546] I
                        [master(/gluster-export):1343:crawl]<br>
                        _GMaster: processing xsync changelog<br>
/var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a<br>
                        6e41d8d6e56984/xsync/XSYNC-CHANGELOG.1451724692<br>
                        [2016-01-02 11:42:55.182342] I
                        [master(/gluster-export):1343:crawl]<br>
                        _GMaster: processing xsync changelog<br>
/var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a<br>
                        6e41d8d6e56984/xsync/XSYNC-CHANGELOG.1451724751<br>
                        [2016-01-02 11:44:11.168962] I
                        [master(/gluster-export):1340:crawl]<br>
                        _GMaster: finished hybrid crawl syncing, stime:
                        (-1, 0)<br>
                        [2016-01-02 11:44:11.246845] I<br>
                        [master(/gluster-export):490:crawlwrap]
                        _GMaster: primary master<br>
                        with volume id
                        6a071cfa-b150-4f0b-b1ed-96ab5d4bd671 ...<br>
                        [2016-01-02 11:44:11.265209] I<br>
                        [master(/gluster-export):501:crawlwrap]
                        _GMaster: crawl interval: 3<br>
                        seconds<br>
                        [2016-01-02 11:44:11.896940] I
                        [master(/gluster-export):1192:crawl]<br>
                        _GMaster: slave's time: (-1, 0)<br>
                        [2016-01-02 11:44:12.171761] E
                        [repce(/gluster-export):207:__call__]<br>
                        RepceClient: call
                        18897:139899553576768:1451735052.09 (entry_ops)<br>
                        failed on peer with OSError<br>
                        [2016-01-02 11:44:12.172101] E<br>
                        [syncdutils(/gluster-export):270:log_raise_exception]
                        &lt;top&gt;: FAIL:<br>
                        Traceback (most recent call last):<br>
                           File<br>
"/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py",<br>
                        line 164, in main<br>
                             main_i()<br>
                           File<br>
"/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py",<br>
                        line 643, in main_i<br>
                             local.service_loop(*[r for r in [remote] if
                        r])<br>
                           File<br>
"/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py",<br>
                        line 1344, in service_loop<br>
                             g2.crawlwrap()<br>
                           File<br>
"/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py",<br>
                        line 539, in crawlwrap<br>
                             self.crawl(no_stime_update=no_stime_update)<br>
                           File<br>
"/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py",<br>
                        line 1204, in crawl<br>
                             self.process(changes)<br>
                           File<br>
"/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py",<br>
                        line 956, in process<br>
                             self.process_change(change, done, retry)<br>
                           File<br>
"/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py",<br>
                        line 920, in process_change<br>
                             self.slave.server.entry_ops(entries)<br>
                           File<br>
"/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py",<br>
                        line 226, in __call__<br>
                             return self.ins(self.meth, *a)<br>
                           File<br>
"/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py",<br>
                        line 208, in __call__<br>
                             raise res<br>
                        OSError: [Errno 16] Device or resource busy<br>
                        [2016-01-02 11:44:12.258982] I<br>
                        [syncdutils(/gluster-export):214:finalize]
                        &lt;top&gt;: exiting.<br>
                        [2016-01-02 11:44:12.321808] I
                        [repce(agent):92:service_loop]<br>
                        RepceServer: terminating on reaching EOF.<br>
                        [2016-01-02 11:44:12.349766] I
                        [syncdutils(agent):214:finalize]<br>
                        &lt;top&gt;: exiting.<br>
                        [2016-01-02 11:44:12.435992] I
                        [monitor(monitor):141:set_state]<br>
                        Monitor: new state: faulty<br>
                        [2016-01-02 11:44:23.164284] I
                        [monitor(monitor):215:monitor]<br>
                        Monitor:
                        ------------------------------------------------------------<br>
                        [2016-01-02 11:44:23.169981] I
                        [monitor(monitor):216:monitor]<br>
                        Monitor: starting gsyncd worker<br>
                        [2016-01-02 11:44:23.216662] I
                        [changelogagent(agent):72:__init__]<br>
                        ChangelogAgent: Agent listining...<br>
                        [2016-01-02 11:44:23.239778] I
                        [gsyncd(/gluster-export):633:main_i]<br>
                        &lt;top&gt;: syncing:
                        gluster://localhost:ger-ber-01 -&gt;<br>
<a class="moz-txt-link-freetext" href="ssh://root@gluster-wien-07-int:gluster://localhost:aut-wien-vol-01">ssh://root@gluster-wien-07-int:gluster://localhost:aut-wien-vol-01</a><br>
                        [2016-01-02 11:44:26.358613] I<br>
                        [master(/gluster-export):75:gmaster_builder]
                        &lt;top&gt;: setting up xsync<br>
                        change detection mode<br>
                        [2016-01-02 11:44:26.358983] I<br>
                        [master(/gluster-export):413:__init__] _GMaster:
                        using 'rsync' as<br>
                        the sync engine<br>
                        [2016-01-02 11:44:26.359985] I<br>
                        [master(/gluster-export):75:gmaster_builder]
                        &lt;top&gt;: setting up<br>
                        changelog change detection mode<br>
                        [2016-01-02 11:44:26.360243] I<br>
                        [master(/gluster-export):413:__init__] _GMaster:
                        using 'rsync' as<br>
                        the sync engine<br>
                        [2016-01-02 11:44:26.361159] I<br>
                        [master(/gluster-export):75:gmaster_builder]
                        &lt;top&gt;: setting up<br>
                        changeloghistory change detection mode<br>
                        [2016-01-02 11:44:26.361377] I<br>
                        [master(/gluster-export):413:__init__] _GMaster:
                        using 'rsync' as<br>
                        the sync engine<br>
                        [2016-01-02 11:44:26.402601] I<br>
                        [master(/gluster-export):1311:register]
                        _GMaster: xsync temp<br>
                        directory:<br>
/var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a6e<br>
                        41d8d6e56984/xsync<br>
                        [2016-01-02 11:44:26.402848] I<br>
                        [resource(/gluster-export):1318:service_loop]
                        GLUSTER: Register<br>
                        time: 1451735066<br>
                        [2016-01-02 11:44:27.26012] I<br>
                        [master(/gluster-export):490:crawlwrap]
                        _GMaster: primary master<br>
                        with volume id
                        6a071cfa-b150-4f0b-b1ed-96ab5d4bd671 ...<br>
                        [2016-01-02 11:44:27.31605] I<br>
                        [master(/gluster-export):501:crawlwrap]
                        _GMaster: crawl interval: 1<br>
                        seconds<br>
                        [2016-01-02 11:44:27.66868] I
                        [master(/gluster-export):1226:crawl]<br>
                        _GMaster: starting history crawl... turns: 1,
                        stime: (-1, 0)<br>
                        [2016-01-02 11:44:27.67043] I
                        [master(/gluster-export):1229:crawl]<br>
                        _GMaster: stime not available, abandoning
                        history crawl<br>
                        [2016-01-02 11:44:27.112426] I<br>
                        [master(/gluster-export):490:crawlwrap]
                        _GMaster: primary master<br>
                        with volume id
                        6a071cfa-b150-4f0b-b1ed-96ab5d4bd671 ...<br>
                        [2016-01-02 11:44:27.117506] I<br>
                        [master(/gluster-export):501:crawlwrap]
                        _GMaster: crawl interval: 60<br>
                        seconds<br>
                        [2016-01-02 11:44:27.140610] I
                        [master(/gluster-export):1333:crawl]<br>
                        _GMaster: starting hybrid crawl..., stime: (-1,
                        0)<br>
                        [2016-01-02 11:45:23.417233] I
                        [monitor(monitor):141:set_state]<br>
                        Monitor: new state: Stable<br>
                        [2016-01-02 11:45:48.225915] I
                        [master(/gluster-export):1343:crawl]<br>
                        _GMaster: processing xsync changelog<br>
/var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a<br>
                        6e41d8d6e56984/xsync/XSYNC-CHANGELOG.1451735067<br>
                        [2016-01-02 11:47:08.65231] I
                        [master(/gluster-export):1343:crawl]<br>
                        _GMaster: processing xsync changelog<br>
/var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a6<br>
                        e41d8d6e56984/xsync/XSYNC-CHANGELOG.1451735148<br>
                        ...<br>
                        <br>
                        <br>
                        slave gluster-wien-07 :<br>
                        ------------------------<br>
                        <br>
                        [2016-01-02 11:44:12.007744] W
                        [fuse-bridge.c:1261:fuse_err_cbk]<br>
                        0-glusterfs-fuse: 1959820: SETXATTR()<br>
                        /.gfid/5e436e5b-086b-4720-9e70-0e49c8e09698
                        =&gt; -1 (File exists)<br>
                        [2016-01-02 11:44:12.010970] W<br>
                        [client-rpc-fops.c:240:client3_3_mknod_cbk]<br>
                        0-aut-wien-vol-01-client-5: remote operation
                        failed: File exists.<br>
                        Path:<br>
&lt;gfid:666bceac-7c14-4efd-81fe-8185458fcf1f&gt;/11-kxyrM3NgdtBWPFv4.webm<br>
                        [2016-01-02 11:44:12.011327] W<br>
                        [client-rpc-fops.c:240:client3_3_mknod_cbk]<br>
                        0-aut-wien-vol-01-client-4: remote operation
                        failed: File exists.<br>
                        Path:<br>
&lt;gfid:666bceac-7c14-4efd-81fe-8185458fcf1f&gt;/11-kxyrM3NgdtBWPFv4.webm<br>
                        [2016-01-02 11:44:12.012054] W
                        [fuse-bridge.c:1261:fuse_err_cbk]<br>
                        0-glusterfs-fuse: 1959822: SETXATTR()<br>
                        /.gfid/666bceac-7c14-4efd-81fe-8185458fcf1f
                        =&gt; -1 (File exists)<br>
                        [2016-01-02 11:44:12.024743] W<br>
                        [client-rpc-fops.c:240:client3_3_mknod_cbk]<br>
                        0-aut-wien-vol-01-client-5: remote operation
                        failed: File exists.<br>
                        Path:
                        &lt;gfid:5bfd6f99-07e8-4b2f-844b-aa0b6535c055&gt;/Gf4FYbpDTC7yK2mv.png<br>
                        [2016-01-02 11:44:12.024970] W<br>
                        [client-rpc-fops.c:240:client3_3_mknod_cbk]<br>
                        0-aut-wien-vol-01-client-4: remote operation
                        failed: File exists.<br>
                        Path:
                        &lt;gfid:5bfd6f99-07e8-4b2f-844b-aa0b6535c055&gt;/Gf4FYbpDTC7yK2mv.png<br>
                        [2016-01-02 11:44:12.025601] W
                        [fuse-bridge.c:1261:fuse_err_cbk]<br>
                        0-glusterfs-fuse: 1959823: SETXATTR()<br>
                        /.gfid/5bfd6f99-07e8-4b2f-844b-aa0b6535c055
                        =&gt; -1 (File exists)<br>
                        [2016-01-02 11:44:12.100688] I<br>
[dht-selfheal.c:1065:dht_selfheal_layout_new_directory]<br>
                        0-aut-wien-vol-01-dht: chunk size = 0xffffffff /
                        57217563 = 0x4b<br>
                        [2016-01-02 11:44:12.100765] I<br>
[dht-selfheal.c:1103:dht_selfheal_layout_new_directory]<br>
                        0-aut-wien-vol-01-dht: assigning range size
                        0x5542c4a3 to<br>
                        aut-wien-vol-01-replicate-0<br>
                        [2016-01-02 11:44:12.100785] I<br>
[dht-selfheal.c:1103:dht_selfheal_layout_new_directory]<br>
                        0-aut-wien-vol-01-dht: assigning range size
                        0x5542c4a3 to<br>
                        aut-wien-vol-01-replicate-1<br>
                        [2016-01-02 11:44:12.100800] I<br>
[dht-selfheal.c:1103:dht_selfheal_layout_new_directory]<br>
                        0-aut-wien-vol-01-dht: assigning range size
                        0x5542c4a3 to<br>
                        aut-wien-vol-01-replicate-2<br>
                        [2016-01-02 11:44:12.100839] I [MSGID: 109036]<br>
[dht-common.c:6296:dht_log_new_layout_for_dir_selfheal]<br>
                        0-aut-wien-vol-01-dht: Setting layout of<br>
                        &lt;gfid:d4815ee4-3348-4105-9136-d0219d956ed8&gt;/.dstXXX0HUpRD
                        with<br>
                        [Subvol_name: aut-wien-vol-01-re<br>
                        plicate-0, Err: -1 , Start: 0 , Stop: 1430439074
                        ], [Subvol_name:<br>
                        aut-wien-vol-01-replicate-1, Err: -1 , Start:
                        1430439075 , Stop:<br>
                        2860878149 ], [Subvol_name:
                        aut-wien-vol-01-replicate-2, Err: -1 ,<br>
                        Start: 2860878150 , Stop: 4294967295 ],<br>
                        [2016-01-02 11:44:12.114192] W<br>
                        [client-rpc-fops.c:306:client3_3_mkdir_cbk]<br>
                        0-aut-wien-vol-01-client-2: remote operation
                        failed: File exists.<br>
                        Path:
                        &lt;gfid:cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2&gt;/.dstXXb70G3x<br>
                        [2016-01-02 11:44:12.114275] W<br>
                        [client-rpc-fops.c:306:client3_3_mkdir_cbk]<br>
                        0-aut-wien-vol-01-client-3: remote operation
                        failed: File exists.<br>
                        Path:
                        &lt;gfid:cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2&gt;/.dstXXb70G3x<br>
                        [2016-01-02 11:44:12.114879] W
                        [fuse-bridge.c:1261:fuse_err_cbk]<br>
                        0-glusterfs-fuse: 1959831: SETXATTR()<br>
                        /.gfid/cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2
                        =&gt; -1 (File exists)<br>
                        [2016-01-02 11:44:12.118473] I<br>
                        [dht-layout.c:663:dht_layout_normalize]
                        0-aut-wien-vol-01-dht: Found<br>
                        anomalies in<br>
                        /.gfid/cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2/.dstXXb70G3x
                        (gfid =<br>
                        00000000-0000-0000-0000-000000000000). Holes=1
                        overlaps=0<br>
                        [2016-01-02 11:44:12.118537] I<br>
[dht-selfheal.c:1065:dht_selfheal_layout_new_directory]<br>
                        0-aut-wien-vol-01-dht: chunk size = 0xffffffff /
                        57217563 = 0x4b<br>
                        [2016-01-02 11:44:12.118562] I<br>
[dht-selfheal.c:1103:dht_selfheal_layout_new_directory]<br>
                        0-aut-wien-vol-01-dht: assigning range size
                        0x5542c4a3 to<br>
                        aut-wien-vol-01-replicate-2<br>
                        [2016-01-02 11:44:12.118579] I<br>
[dht-selfheal.c:1103:dht_selfheal_layout_new_directory]<br>
                        0-aut-wien-vol-01-dht: assigning range size
                        0x5542c4a3 to<br>
                        aut-wien-vol-01-replicate-0<br>
                        [2016-01-02 11:44:12.118613] I<br>
[dht-selfheal.c:1103:dht_selfheal_layout_new_directory]<br>
                        0-aut-wien-vol-01-dht: assigning range size
                        0x5542c4a3 to<br>
                        aut-wien-vol-01-replicate-1<br>
                        [2016-01-02 11:44:12.120352] I [MSGID: 109036]<br>
[dht-common.c:6296:dht_log_new_layout_for_dir_selfheal]<br>
                        0-aut-wien-vol-01-dht: Setting layout of<br>
                        /.gfid/cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2/.dstXXb70G3x
                        with<br>
                        [Subvol_name: aut-wien-vol-01-rep<br>
                        licate-0, Err: -1 , Start: 1430439075 , Stop:
                        2860878149 ],<br>
                        [Subvol_name: aut-wien-vol-01-replicate-1, Err:
                        -1 , Start:<br>
                        2860878150 , Stop: 4294967295 ], [Subvol_name:<br>
                        aut-wien-vol-01-replicate-2, Err: -1 , Start: 0
                        , Stop: 1430439074 ],<br>
                        [2016-01-02 11:44:12.630949] I
                        [fuse-bridge.c:4927:fuse_thread_proc]<br>
                        0-fuse: unmounting /tmp/gsyncd-aux-mount-tOUOsz<br>
                        [2016-01-02 11:44:12.633952] W
                        [glusterfsd.c:1211:cleanup_and_exit]<br>
                        (--&gt; 0-: received signum (15), shutting down<br>
                        [2016-01-02 11:44:12.633964] I
                        [fuse-bridge.c:5607:fini] 0-fuse:<br>
                        Unmounting '/tmp/gsyncd-aux-mount-tOUOsz'.<br>
                        [2016-01-02 11:44:23.946702] I [MSGID: 100030]<br>
                        [glusterfsd.c:2035:main] 0-/usr/sbin/glusterfs:
                        Started running<br>
                        /usr/sbin/glusterfs version 3.6.7 (args:
                        /usr/sbin/glusterfs<br>
                        --aux-gfid-mount
                        --log-file=/var/log/glusterfs/geo-replication-slav<br>
es/6a071cfa-b150-4f0b-b1ed-96ab5d4bd671:gluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.gluster.log<br>
                        --volfile-server=localhost
                        --volfile-id=aut-wien-vol-01<br>
                        --client-pid=-1 /tmp/gsyncd-aux-mount-otU3wS)<br>
                        [2016-01-02 11:44:24.042128] I
                        [dht-shared.c:337:dht_init_regex]<br>
                        0-aut-wien-vol-01-dht: using regex
                        rsync-hash-regex = ^\.(.+)\.[^.]+$<br>
                        [2016-01-02 11:44:24.046315] I
                        [client.c:2268:notify]<br>
                        0-aut-wien-vol-01-client-0: parent translators
                        are ready, attempting<br>
                        connect on transport<br>
                        [2016-01-02 11:44:24.046532] I
                        [client.c:2268:notify]<br>
                        0-aut-wien-vol-01-client-1: parent translators
                        are ready, attempting<br>
                        connect on transport<br>
                        [2016-01-02 11:44:24.046664] I
                        [client.c:2268:notify]<br>
                        0-aut-wien-vol-01-client-2: parent translators
                        are ready, attempting<br>
                        connect on transport<br>
                        [2016-01-02 11:44:24.046806] I
                        [client.c:2268:notify]<br>
                        0-aut-wien-vol-01-client-3: parent translators
                        are ready, attempting<br>
                        connect on transport<br>
                        [2016-01-02 11:44:24.046940] I
                        [client.c:2268:notify]<br>
                        0-aut-wien-vol-01-client-4: parent translators
                        are ready, attempting<br>
                        connect on transport<br>
                        [2016-01-02 11:44:24.047070] I
                        [client.c:2268:notify]<br>
                        0-aut-wien-vol-01-client-5: parent translators
                        are ready, attempting<br>
                        connect on transport<br>
                        Final graph:<br>
+------------------------------------------------------------------------------+<br>
                        <br>
                           1: volume aut-wien-vol-01-client-0<br>
                           2:     type protocol/client<br>
                           3:     option ping-timeout 10<br>
                           4:     option remote-host gluster-wien-02-int<br>
                           5:     option remote-subvolume
                        /gluster-export<br>
                           6:     option transport-type socket<br>
                           7:     option username
                        6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929<br>
                           8:     option password
                        8777e154-476c-449a-89b2-3199872e4a1f<br>
                           9:     option send-gids true<br>
                          10: end-volume<br>
                          11:<br>
                          12: volume aut-wien-vol-01-client-1<br>
                          13:     type protocol/client<br>
                          14:     option ping-timeout 10<br>
                          15:     option remote-host gluster-wien-03-int<br>
                          16:     option remote-subvolume
                        /gluster-export<br>
                          17:     option transport-type socket<br>
                          18:     option username
                        6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929<br>
                          19:     option password
                        8777e154-476c-449a-89b2-3199872e4a1f<br>
                          20:     option send-gids true<br>
                          21: end-volume<br>
                          22:<br>
                          23: volume aut-wien-vol-01-replicate-0<br>
                          24:     type cluster/replicate<br>
                          25:     subvolumes aut-wien-vol-01-client-0
                        aut-wien-vol-01-client-1<br>
                          26: end-volume<br>
                          27:<br>
                          28: volume aut-wien-vol-01-client-2<br>
                          29:     type protocol/client<br>
                          30:     option ping-timeout 10<br>
                          31:     option remote-host gluster-wien-04-int<br>
                          32:     option remote-subvolume
                        /gluster-export<br>
                          33:     option transport-type socket<br>
                          34:     option username
                        6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929<br>
                          35:     option password
                        8777e154-476c-449a-89b2-3199872e4a1f<br>
                          36:     option send-gids true<br>
                          37: end-volume<br>
                          38:<br>
                          39: volume aut-wien-vol-01-client-3<br>
                          40:     type protocol/client<br>
                          41:     option ping-timeout 10<br>
                          42:     option remote-host gluster-wien-05-int<br>
                          43:     option remote-subvolume
                        /gluster-export<br>
                          44:     option transport-type socket<br>
                          45:     option username
                        6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929<br>
                          46:     option password
                        8777e154-476c-449a-89b2-3199872e4a1f<br>
                          47:     option send-gids true<br>
                          48: end-volume<br>
                          49:<br>
                          50: volume aut-wien-vol-01-replicate-1<br>
                          51:     type cluster/replicate<br>
                          52:     subvolumes aut-wien-vol-01-client-2
                        aut-wien-vol-01-client-3<br>
                          53: end-volume<br>
                          54:<br>
                          55: volume aut-wien-vol-01-client-4<br>
                          56:     type protocol/client<br>
                          57:     option ping-timeout 10<br>
                          58:     option remote-host gluster-wien-06-int<br>
                          59:     option remote-subvolume
                        /gluster-export<br>
                          60:     option transport-type socket<br>
                          61:     option username
                        6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929<br>
                          62:     option password
                        8777e154-476c-449a-89b2-3199872e4a1f<br>
                          63:     option send-gids true<br>
                          64: end-volume<br>
                          65:<br>
                          66: volume aut-wien-vol-01-client-5<br>
                          67:     type protocol/client<br>
                          68:     option ping-timeout 10<br>
                          69:     option remote-host gluster-wien-07-int<br>
                          70:     option remote-subvolume
                        /gluster-export<br>
                          71:     option transport-type socket<br>
                          72:     option username
                        6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929<br>
                          73:     option password
                        8777e154-476c-449a-89b2-3199872e4a1f<br>
                          74:     option send-gids true<br>
                          75: end-volume<br>
                          76:<br>
                          77: volume aut-wien-vol-01-replicate-2<br>
                          78:     type cluster/replicate<br>
                          79:     subvolumes aut-wien-vol-01-client-4
                        aut-wien-vol-01-client-5<br>
                          80: end-volume<br>
                          81:<br>
                          82: volume aut-wien-vol-01-dht<br>
                          83:     type cluster/distribute<br>
                          84:     subvolumes aut-wien-vol-01-replicate-0 aut-wien-vol-01-replicate-1 aut-wien-vol-01-replicate-2<br>
                          85: end-volume<br>
                          86:<br>
                          87: volume aut-wien-vol-01-write-behind<br>
                          88:     type performance/write-behind<br>
                          89:     subvolumes aut-wien-vol-01-dht<br>
                          90: end-volume<br>
                          91:<br>
                          92: volume aut-wien-vol-01-read-ahead<br>
                          93:     type performance/read-ahead<br>
                          94:     subvolumes
                        aut-wien-vol-01-write-behind<br>
                          95: end-volume<br>
                          96:<br>
                          97: volume aut-wien-vol-01-io-cache<br>
                          98:     type performance/io-cache<br>
                          99:     option min-file-size 0<br>
                        100:     option cache-timeout 2<br>
                        101:     option cache-size 1024MB<br>
                        102:     subvolumes aut-wien-vol-01-read-ahead<br>
                        103: end-volume<br>
                        104:<br>
                        105: volume aut-wien-vol-01-quick-read<br>
                        106:     type performance/quick-read<br>
                        107:     option cache-size 1024MB<br>
                        108:     subvolumes aut-wien-vol-01-io-cache<br>
                        109: end-volume<br>
                        110:<br>
                        111: volume aut-wien-vol-01-open-behind<br>
                        112:     type performance/open-behind<br>
                        113:     subvolumes aut-wien-vol-01-quick-read<br>
                        114: end-volume<br>
                        115:<br>
                        116: volume aut-wien-vol-01-md-cache<br>
                        117:     type performance/md-cache<br>
                        118:     subvolumes aut-wien-vol-01-open-behind<br>
                        119: end-volume<br>
                        120:<br>
                        121: volume aut-wien-vol-01<br>
                        122:     type debug/io-stats<br>
                        123:     option latency-measurement off<br>
                        124:     option count-fop-hits off<br>
                        125:     subvolumes aut-wien-vol-01-md-cache<br>
                        126: end-volume<br>
                        127:<br>
                        128: volume gfid-access-autoload<br>
                        129:     type features/gfid-access<br>
                        130:     subvolumes aut-wien-vol-01<br>
                        131: end-volume<br>
                        132:<br>
                        133: volume meta-autoload<br>
                        134:     type meta<br>
                        135:     subvolumes gfid-access-autoload<br>
                        136: end-volume<br>
                        137:<br>
+------------------------------------------------------------------------------+<br>
                        <br>
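                        (For reference: the non-default pieces of that client graph, the 10-second ping-timeout and the 1024MB io-cache/quick-read caches, correspond to ordinary volume options on the slave volume. A minimal sketch of how they would be inspected or tuned from a gluster-wien node; option names assumed from the 3.x volume-set table:)<br>
                        <tt># sketch: the volume options behind the graph above<br>
                        gluster volume info aut-wien-vol-01                               # lists options changed from their defaults<br>
                        gluster volume set aut-wien-vol-01 network.ping-timeout 10        # -&gt; "option ping-timeout 10" in each protocol/client<br>
                        gluster volume set aut-wien-vol-01 performance.cache-size 1024MB  # -&gt; "option cache-size 1024MB" in io-cache / quick-read<br>
                        </tt><br>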
                        [2016-01-02 11:44:24.047642] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-aut-wien-vol-01-client-5: changing port to 49153 (from 0)<br>
                        [2016-01-02 11:44:24.047927] I [client-handshake.c:1413:select_server_supported_programs] 0-aut-wien-vol-01-client-5: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
                        [2016-01-02 11:44:24.048044] I [client-handshake.c:1200:client_setvolume_cbk] 0-aut-wien-vol-01-client-5: Connected to aut-wien-vol-01-client-5, attached to remote volume '/gluster-export'.<br>
                        [2016-01-02 11:44:24.048050] I [client-handshake.c:1210:client_setvolume_cbk] 0-aut-wien-vol-01-client-5: Server and Client lk-version numbers are not same, reopening the fds<br>
                        [2016-01-02 11:44:24.048088] I [MSGID: 108005] [afr-common.c:3684:afr_notify] 0-aut-wien-vol-01-replicate-2: Subvolume 'aut-wien-vol-01-client-5' came back up; going online.<br>
                        [2016-01-02 11:44:24.048114] I [client-handshake.c:188:client_set_lk_version_cbk] 0-aut-wien-vol-01-client-5: Server lk version = 1<br>
                        [2016-01-02 11:44:24.048124] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-aut-wien-vol-01-client-0: changing port to 49153 (from 0)<br>
                        [2016-01-02 11:44:24.048132] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-aut-wien-vol-01-client-1: changing port to 49153 (from 0)<br>
                        [2016-01-02 11:44:24.048138] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-aut-wien-vol-01-client-2: changing port to 49153 (from 0)<br>
                        [2016-01-02 11:44:24.048146] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-aut-wien-vol-01-client-3: changing port to 49153 (from 0)<br>
                        [2016-01-02 11:44:24.048153] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-aut-wien-vol-01-client-4: changing port to 49153 (from 0)<br>
                        [2016-01-02 11:44:24.049070] I [client-handshake.c:1413:select_server_supported_programs] 0-aut-wien-vol-01-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
                        [2016-01-02 11:44:24.049094] I [client-handshake.c:1413:select_server_supported_programs] 0-aut-wien-vol-01-client-3: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
                        [2016-01-02 11:44:24.049113] I [client-handshake.c:1413:select_server_supported_programs] 0-aut-wien-vol-01-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
                        [2016-01-02 11:44:24.049131] I [client-handshake.c:1413:select_server_supported_programs] 0-aut-wien-vol-01-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
                        [2016-01-02 11:44:24.049224] I [client-handshake.c:1413:select_server_supported_programs] 0-aut-wien-vol-01-client-4: Using Program GlusterFS 3.3, Num (1298437), Version (330)<br>
                        [2016-01-02 11:44:24.049307] I [client-handshake.c:1200:client_setvolume_cbk] 0-aut-wien-vol-01-client-0: Connected to aut-wien-vol-01-client-0, attached to remote volume '/gluster-export'.<br>
                        [2016-01-02 11:44:24.049312] I [client-handshake.c:1210:client_setvolume_cbk] 0-aut-wien-vol-01-client-0: Server and Client lk-version numbers are not same, reopening the fds<br>
                        [2016-01-02 11:44:24.049324] I [MSGID: 108005] [afr-common.c:3684:afr_notify] 0-aut-wien-vol-01-replicate-0: Subvolume 'aut-wien-vol-01-client-0' came back up; going online.<br>
                        [2016-01-02 11:44:24.049384] I [client-handshake.c:1200:client_setvolume_cbk] 0-aut-wien-vol-01-client-3: Connected to aut-wien-vol-01-client-3, attached to remote volume '/gluster-export'.<br>
                        [2016-01-02 11:44:24.049389] I [client-handshake.c:1210:client_setvolume_cbk] 0-aut-wien-vol-01-client-3: Server and Client lk-version numbers are not same, reopening the fds<br>
                        [2016-01-02 11:44:24.049400] I [MSGID: 108005] [afr-common.c:3684:afr_notify] 0-aut-wien-vol-01-replicate-1: Subvolume 'aut-wien-vol-01-client-3' came back up; going online.<br>
                        [2016-01-02 11:44:24.049418] I [client-handshake.c:1200:client_setvolume_cbk] 0-aut-wien-vol-01-client-2: Connected to aut-wien-vol-01-client-2, attached to remote volume '/gluster-export'.<br>
                        [2016-01-02 11:44:24.049422] I [client-handshake.c:1210:client_setvolume_cbk] 0-aut-wien-vol-01-client-2: Server and Client lk-version numbers are not same, reopening the fds<br>
                        [2016-01-02 11:44:24.049460] I [client-handshake.c:1200:client_setvolume_cbk] 0-aut-wien-vol-01-client-1: Connected to aut-wien-vol-01-client-1, attached to remote volume '/gluster-export'.<br>
                        [2016-01-02 11:44:24.049465] I [client-handshake.c:1210:client_setvolume_cbk] 0-aut-wien-vol-01-client-1: Server and Client lk-version numbers are not same, reopening the fds<br>
                        [2016-01-02 11:44:24.049493] I [client-handshake.c:188:client_set_lk_version_cbk] 0-aut-wien-vol-01-client-0: Server lk version = 1<br>
                        [2016-01-02 11:44:24.049567] I [client-handshake.c:188:client_set_lk_version_cbk] 0-aut-wien-vol-01-client-3: Server lk version = 1<br>
                        [2016-01-02 11:44:24.049632] I [client-handshake.c:1200:client_setvolume_cbk] 0-aut-wien-vol-01-client-4: Connected to aut-wien-vol-01-client-4, attached to remote volume '/gluster-export'.<br>
                        [2016-01-02 11:44:24.049638] I [client-handshake.c:1210:client_setvolume_cbk] 0-aut-wien-vol-01-client-4: Server and Client lk-version numbers are not same, reopening the fds<br>
                        [2016-01-02 11:44:24.052103] I [fuse-bridge.c:5086:fuse_graph_setup] 0-fuse: switched to graph 0<br>
                        [2016-01-02 11:44:24.052150] I [client-handshake.c:188:client_set_lk_version_cbk] 0-aut-wien-vol-01-client-2: Server lk version = 1<br>
                        [2016-01-02 11:44:24.052163] I [client-handshake.c:188:client_set_lk_version_cbk] 0-aut-wien-vol-01-client-4: Server lk version = 1<br>
                        [2016-01-02 11:44:24.052192] I [client-handshake.c:188:client_set_lk_version_cbk] 0-aut-wien-vol-01-client-1: Server lk version = 1<br>
                        [2016-01-02 11:44:24.052204] I [fuse-bridge.c:4015:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.20<br>
                        [2016-01-02 11:44:24.053991] I [afr-common.c:1491:afr_local_discovery_cbk] 0-aut-wien-vol-01-replicate-2: selecting local read_child aut-wien-vol-01-client-5<br>
                        [2016-01-02 11:45:48.613563] W [client-rpc-fops.c:306:client3_3_mkdir_cbk] 0-aut-wien-vol-01-client-5: remote operation failed: File exists. Path: /keys<br>
                        [2016-01-02 11:45:48.614131] W [client-rpc-fops.c:306:client3_3_mkdir_cbk] 0-aut-wien-vol-01-client-4: remote operation failed: File exists. Path: /keys<br>
                        [2016-01-02 11:45:48.614436] W [fuse-bridge.c:1261:fuse_err_cbk] 0-glusterfs-fuse: 12: SETXATTR() /.gfid/00000000-0000-0000-0000-000000000001 =&gt; -1 (File exists)<br>
                        ...<br>
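                        (Those mkdir/SETXATTR "File exists" warnings usually mean the name already exists on the slave bricks with a different type or gfid than on the master. A hedged sketch to compare both sides directly, using the /keys path from the log and the /gluster-export brick path from the graph above:)<br>
                        <tt># run on a master brick host and on a slave brick host, then compare<br>
                        stat --format='%F  %n' /gluster-export/keys           # "directory" on one side vs. "regular file" on the other<br>
                        getfattr -n trusted.gfid -e hex /gluster-export/keys  # independently created entries show different gfids<br>
                        </tt><br>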
                        <br>
                        <br>
                        [ 13:41:40 ] - root@gluster-ger-ber-07 /var/log/glusterfs/geo-replication/ger-ber-01 $gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 status detail<br>
                        <br>
                        MASTER NODE   MASTER VOL   MASTER BRICK   SLAVE   STATUS   CHECKPOINT STATUS   CRAWL STATUS   FILES SYNCD   FILES PENDING   BYTES PENDING   DELETES PENDING   FILES SKIPPED<br>
                        ------------------------------------------------------------------------------------------------------------<br>
                        gluster-ger-ber-07   ger-ber-01   /gluster-export   gluster-wien-07-int::aut-wien-vol-01   Active    N/A   Hybrid Crawl   10743644   8192   0   0   0<br>
                        gluster-ger-ber-11   ger-ber-01   /gluster-export   gluster-wien-03-int::aut-wien-vol-01   Active    N/A   Hybrid Crawl   16037091   8192   0   0   0<br>
                        gluster-ger-ber-10   ger-ber-01   /gluster-export   gluster-wien-02-int::aut-wien-vol-01   Passive   N/A   N/A            0          0      0   0   0<br>
                        gluster-ger-ber-12   ger-ber-01   /gluster-export   gluster-wien-06-int::aut-wien-vol-01   Passive   N/A   N/A            0          0      0   0   0<br>
                        gluster-ger-ber-09   ger-ber-01   /gluster-export   gluster-wien-05-int::aut-wien-vol-01   Active    N/A   Hybrid Crawl   16180514   8192   0   0   0<br>
                        gluster-ger-ber-08   ger-ber-01   /gluster-export   gluster-wien-04-int::aut-wien-vol-01   Passive   N/A   N/A            0          0      0   0   0<br>
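                        (Since the three active bricks are still in the initial Hybrid Crawl, the FILES SYNCD / FILES PENDING columns are the simplest progress indicator; a small sketch for keeping an eye on them:)<br>
                        <tt>watch -n 60 "gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 status detail"<br>
                        </tt><br>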
                        <br>
                        <br>
                        [ 13:41:55 ] - root@gluster-ger-ber-07 /var/log/glusterfs/geo-replication/ger-ber-01 $gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 config<br>
                        special_sync_mode: partial<br>
                        state_socket_unencoded: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.socket<br>
                        gluster_log_file: /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.gluster.log<br>
                        ssh_command: ssh -p 2503 -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem<br>
                        ignore_deletes: true<br>
                        change_detector: changelog<br>
                        ssh_command_tar: ssh -p 2503 -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem<br>
                        state_file: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.status<br>
                        remote_gsyncd: /nonexistent/gsyncd<br>
                        log_file: /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.log<br>
                        changelog_log_file: /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01-changes.log<br>
                        socketdir: /var/run<br>
                        working_dir: /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01<br>
                        state_detail_file: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01-detail.status<br>
                        session_owner: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671<br>
                        gluster_command_dir: /usr/sbin/<br>
                        pid_file: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.pid<br>
                        georep_session_working_dir: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/<br>
                        gluster_params: aux-gfid-mount<br>
                        volume_id: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671<br>
                        [ 13:42:11 ] - root@gluster-ger-ber-07 /var/log/glusterfs/geo-replication/ger-ber-01 $<br>
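                        (When a worker misbehaves, the log_file, gluster_log_file and changelog_log_file paths from that config output are the first places to look on the master side; for example, path copied verbatim from above:)<br>
                        <tt>tail -f /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.log<br>
                        </tt><br>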
                        <br>
                        <br>
                      </blockquote>
                    </blockquote>
                  </blockquote>
                </div>
              </div>
            </blockquote>
            <div class="HOEnZb">
              <div class="h5">
                <br>
                _______________________________________________<br>
                Gluster-users mailing list<br>
                <a moz-do-not-send="true"
                  href="mailto:Gluster-users@gluster.org"
                  target="_blank">Gluster-users@gluster.org</a><br>
                <a moz-do-not-send="true"
                  href="http://www.gluster.org/mailman/listinfo/gluster-users"
                  rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
              </div>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
    <br>
  </body>
</html>