<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    This looks like an SSH configuration issue. Please clean up all
    lines in /root/.ssh/authorized_keys on the Slave node that contain a
    Master node's key but do not start with "command="<br>
    <br>
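    A quick way to list such lines on the Slave (a sketch, assuming the
    Master's key comment is "root@neptune" as in the example in step 3
    below) is:<br>
    grep 'root@neptune' /root/.ssh/authorized_keys | grep -v '^command='<br>
    <br>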
    Please let us know which SSH key was used to set up passwordless SSH
    from the Master node to the Slave node.<br>
    <br>
    To resolve the issue:<br>
    <br>
    1. In "neptune" node,<br>
    cat /var/lib/glusterd/geo-replication/common_secret.pem.pub<br>
    <br>
    2. Open the /root/.ssh/authorized_keys file on the "volume01-replicate"
    node and check whether the keys from the previous step's output are present.<br>
    <br>
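    For example, you can check for a fragment of that key on the Slave
    (here using the first characters of the example key shown in step 3)
    with:<br>
    grep -c 'AAAAB3NzaC1yc2EAAAADAQABAAABAQDhR4kp' /root/.ssh/authorized_keys<br>
    <br>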
    3. There should not be any other line that contains the same key
    without "command=" at the beginning.<br>
    For example:<br>
    ssh-rsa
    AAAAB3NzaC1yc2EAAAADAQABAAABAQDhR4kp978pjze9y1ozySB6jgz2VjeLKnCWIIsZ7NFCue1S4lCU7TgNg2g8FwfXR7LX4mRuLFtQeOkEN9kLGaiJZiN06oU2Jz3Y2gx6egxR5lGiumMg7QLPH1PQPJIfT8Qaz1znH+NlpM1BuivjfOsbtVWTBQpANq4uA8ooln2rLTKIzRGQrS6adUD6KwbjIpVEahJqkZf8YaiaTDJZdXdGGvT6YtytogPmuKwrJ+XujaRd49dDcjeOrcjkFxsf9/IuqBvbZYwW2hwTcqqtSHZfIwHaf6X9fhDizVX4WxPhToiK9LZaEF57hnPAa7bl2if9KFoOyfwZByTIwQPqjymv
    root@neptune<br>
    <br>
    If such a line exists, remove it.<br>
    <br>
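    For reference, the entries that "create push-pem" installs on the
    Slave are prefixed with a forced command, so they normally look
    roughly like the line below (the gsyncd path shown here is an
    assumption and can vary by distribution; the key body is abbreviated):<br>
    command="/usr/libexec/glusterfs/gsyncd" ssh-rsa AAAAB3... root@neptune<br>
    <br>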
    4. Try to connect from the "neptune" node using the following command
    to check whether the issue is resolved:<br>
    ssh -i /var/lib/glusterd/geo-replication/secret.pem
    root@volume01-replicate<br>
    <br>
    If you see gsyncd messages, then everything is normal. If you see SSH
    errors, the issue is not resolved yet. <br>
    <br>
    Let us know if you have any questions.<br>
    <br>
    <pre class="moz-signature" cols="72">regards
Aravinda</pre>
    <div class="moz-cite-prefix">On 08/11/2015 03:08 AM, Don Ky wrote:<br>
    </div>
    <blockquote
cite="mid:CAJJbiV4PWNzaT9_Ee7a=a-k2jFpT-hYhSJ4pOPuyoT=ThXL3wg@mail.gmail.com"
      type="cite">
      <div dir="ltr">Hello all,
        <div><br>
        </div>
        <div>I've been struggling to get gluster geo-replication
          functionality working for the last couple of days. I keep
          getting the following errors:</div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div>
          <div>[2015-08-10 17:27:07.855817] E
            [resource(/gluster/volume1):222:errlog] Popen: command "ssh
            -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
            /var/lib/glusterd/geo-replication/secret.pem
            -oControlMaster=auto -S
            /tmp/gsyncd-aux-ssh-Cnh7xL/ee1e6b6c8823302e93454e632bd81fbe.sock
            <a moz-do-not-send="true"
              href="mailto:root@gluster02.example.com">root@gluster02.example.com</a>
            /nonexistent/gsyncd --session-owner
            50600483-7aa3-4fab-a66c-63350af607b0 -N --listen --timeout
            120 gluster://localhost:volume1-replicate" returned with
            127, saying:</div>
          <div>[2015-08-10 17:27:07.856066] E
            [resource(/gluster/volume1):226:logerr] Popen: ssh&gt; bash:
            /nonexistent/gsyncd: No such file or directory</div>
          <div>[2015-08-10 17:27:07.856441] I
            [syncdutils(/gluster/volume1):220:finalize] &lt;top&gt;:
            exiting.</div>
          <div>[2015-08-10 17:27:07.858120] I
            [repce(agent):92:service_loop] RepceServer: terminating on
            reaching EOF.</div>
          <div>[2015-08-10 17:27:07.858361] I
            [syncdutils(agent):220:finalize] &lt;top&gt;: exiting.</div>
          <div>[2015-08-10 17:27:07.858211] I
            [monitor(monitor):274:monitor] Monitor:
            worker(/gluster/volume1) died before establishing connection</div>
          <div>[2015-08-10 17:27:18.181344] I
            [monitor(monitor):221:monitor] Monitor:
            ------------------------------------------------------------</div>
          <div>[2015-08-10 17:27:18.181842] I
            [monitor(monitor):222:monitor] Monitor: starting gsyncd
            worker</div>
          <div>[2015-08-10 17:27:18.387790] I
            [gsyncd(/gluster/volume1):649:main_i] &lt;top&gt;: syncing:
            gluster://localhost:volume1 -&gt;
            <a class="moz-txt-link-freetext" href="ssh://root@gluster02.example.com:gluster://localhost:volume1-replicate">ssh://root@gluster02.example.com:gluster://localhost:volume1-replicate</a></div>
          <div>[2015-08-10 17:27:18.389427] D [gsyncd(agent):643:main_i]
            &lt;top&gt;: rpc_fd: '7,11,10,9'</div>
          <div>[2015-08-10 17:27:18.390553] I
            [changelogagent(agent):75:__init__] ChangelogAgent: Agent
            listining...</div>
          <div>[2015-08-10 17:27:18.418788] D
            [repce(/gluster/volume1):191:push] RepceClient: call
            8460:140341431777088:1439242038.42 __repce_version__() ...</div>
          <div>[2015-08-10 17:27:18.629983] E
            [syncdutils(/gluster/volume1):252:log_raise_exception]
            &lt;top&gt;: connection to peer is broken</div>
          <div>[2015-08-10 17:27:18.630651] W
            [syncdutils(/gluster/volume1):256:log_raise_exception]
            &lt;top&gt;: !!!!!!!!!!!!!</div>
          <div>[2015-08-10 17:27:18.630794] W
            [syncdutils(/gluster/volume1):257:log_raise_exception]
            &lt;top&gt;: !!! getting "No such file or directory" errors
            is most likely due to MISCONFIGURATION, please consult <a
              moz-do-not-send="true" class="moz-txt-link-freetext"
href="https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html">https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Environment.html</a></div>
          <div>[2015-08-10 17:27:18.630929] W
            [syncdutils(/gluster/volume1):265:log_raise_exception]
            &lt;top&gt;: !!!!!!!!!!!!!</div>
          <div>[2015-08-10 17:27:18.631129] E
            [resource(/gluster/volume1):222:errlog] Popen: command "ssh
            -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
            /var/lib/glusterd/geo-replication/secret.pem
            -oControlMaster=auto -S
            /tmp/gsyncd-aux-ssh-RPuEyN/ee1e6b6c8823302e93454e632bd81fbe.sock
            <a moz-do-not-send="true"
              href="mailto:root@gluster02.example.com">root@gluster02.example.com</a>
            /nonexistent/gsyncd --session-owner
            50600483-7aa3-4fab-a66c-63350af607b0 -N --listen --timeout
            120 gluster://localhost:volume1-replicate" returned with
            127, saying:</div>
          <div>[2015-08-10 17:27:18.631280] E
            [resource(/gluster/volume1):226:logerr] Popen: ssh&gt; bash:
            /nonexistent/gsyncd: No such file or directory</div>
          <div>[2015-08-10 17:27:18.631567] I
            [syncdutils(/gluster/volume1):220:finalize] &lt;top&gt;:
            exiting.</div>
          <div>[2015-08-10 17:27:18.633125] I
            [repce(agent):92:service_loop] RepceServer: terminating on
            reaching EOF.</div>
          <div>[2015-08-10 17:27:18.633183] I
            [monitor(monitor):274:monitor] Monitor:
            worker(/gluster/volume1) died before establishing connection</div>
          <div>[2015-08-10 17:27:18.633392] I
            [syncdutils(agent):220:finalize] &lt;top&gt;: exiting.</div>
        </div>
        <div><br>
        </div>
        <div>and the status is continuously faulty:</div>
        <div><br>
        </div>
        <div>
          <pre>[root@neptune volume1]# gluster volume geo-replication volume01 gluster02::volume01-replicate status

MASTER NODE    MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                            SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
neptune        volume01      /gluster/volume01    root          gluster02::volume01-replicate    N/A           Faulty    N/A             N/A</pre>
        </div>
        <div><br>
        </div>
        <div>What I'm trying to accomplish is to mirror a volume from
          gluster01 (master) to gluster02 (slave). </div>
        <div><br>
        </div>
        <div>Here is a breakdown of the steps I took:</div>
        <div><br>
        </div>
        <div>
          <div>yum -y install glusterfs-server glusterfs-geo-replication</div>
          <div>service glusterd start</div>
          <div><br>
          </div>
          <div>#gluster01</div>
          <div>gluster volume create volume1
            gluster01.example.com:/gluster/volume1</div>
          <div>gluster volume start volume1</div>
          <div><br>
          </div>
          <div>#gluster02</div>
          <div>gluster volume create volume1-replicate
            gluster02.example.com:/gluster/volume1-replicate</div>
          <div>gluster volume start volume1-replicate</div>
          <div><br>
          </div>
          <div><br>
          </div>
          <div>#geo replicate</div>
          <div>gluster system:: execute gsec_create</div>
          <div><br>
          </div>
          <div>#gluster01</div>
          <div>gluster volume geo-replication volume1
            gluster02::volume1-replicate create push-pem</div>
          <div>gluster volume geo-replication volume1
            gluster02::volume1-replicate start</div>
          <div>gluster volume geo-replication volume1
            gluster02::volume1-replicate status</div>
          <div><br>
          </div>
          <div>#mounting and testing</div>
          <div>mkdir /mnt/gluster</div>
          <div>mount -t glusterfs gluster01.example.com:/volume1
            /mnt/gluster</div>
          <div>mount -t glusterfs
            gluster02.example.com:/volume1-replicate /mnt/gluster</div>
          <div><br>
          </div>
          <div>#troubleshooting</div>
          <div>gluster volume geo-replication volume1
            gluster02::volume1-replicate config log-level DEBUG</div>
          <div>service glusterd restart</div>
          <div><br>
          </div>
          <div>gluster volume geo-replication volume1
            gluster02::volume1-replicate config</div>
        </div>
        <div><br>
        </div>
        <div>There was one step before running </div>
        <div><br>
        </div>
        <div>
          <div>gluster volume geo-replication volume1
            gluster02::volume1-replicate create push-pem</div>
        </div>
        <div><br>
        </div>
        <div>I copied the secret.pub to gluster02 (the slave) and added
          it to .ssh/authorized_keys. I can ssh as root from gluster01
          to gluster02 fine. </div>
        <div><br>
        </div>
        <div>I'm currently running:</div>
        <div><br>
        </div>
        <div>
          <div>glusterfs-3.7.3-1.el7.x86_64</div>
          <div>glusterfs-cli-3.7.3-1.el7.x86_64</div>
          <div>glusterfs-libs-3.7.3-1.el7.x86_64</div>
          <div>glusterfs-client-xlators-3.7.3-1.el7.x86_64</div>
          <div>glusterfs-fuse-3.7.3-1.el7.x86_64</div>
          <div>glusterfs-server-3.7.3-1.el7.x86_64</div>
          <div>glusterfs-api-3.7.3-1.el7.x86_64</div>
          <div>glusterfs-geo-replication-3.7.3-1.el7.x86_64</div>
        </div>
        <div><br>
        </div>
        <div>on both slave and master servers. Both servers have ntp
          installed, are in sync, and are patched. </div>
        <div><br>
        </div>
        <div>I can mount volume1 or volume1-replicate on each host and
          have confirmed that iptables has been flushed. </div>
        <div><br>
        </div>
        <div>I'm not sure exactly what else to check at this point. There
          appeared to be another user with similar errors, but the
          mailing list says he resolved it on his own. </div>
        <div><br>
        </div>
        <div>Any ideas? I'm completely lost on what the issue could be.
          Some of the Red Hat docs mentioned it could be fuse, but it
          looks like fuse is installed as part of gluster. </div>
        <div><br>
        </div>
        <div>Thanks </div>
      </div>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a></pre>
    </blockquote>
    <br>
  </body>
</html>