<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    Hi Aytac,<br>
    <br>
    Two gluster server nodes. Two volumes, each a replica 2 volume with
    one brick per node.<br>
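    <br>
    As a rough illustration of that layout only (the hostnames and brick
    paths below are made up, not our real ones):<br>
    <pre>
# one brick per node, replicated across the two servers
gluster volume create vol1 replica 2 node1:/bricks/vol1 node2:/bricks/vol1
gluster volume start vol1
gluster volume info vol1   # should report Type: Replicate, Number of Bricks: 1 x 2 = 2
    </pre>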
    <br>
    One volume is currently still being prepared, so I won't mention it
    any further.<br>
    <br>
    The other volume is used as the main storage domain for our RHEV
    environment, which connects natively using the gluster client. The
    configuration at the RHEV end points to one node, with the second
    node configured as a failover. Clearly, any changes to that volume
    require extreme caution and planned downtime.<br>
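    <br>
    In mount terms the RHEV configuration corresponds roughly to a native
    mount with a backup volfile server, something like the following
    (hostnames, volume name and mount point are made up; RHEV builds the
    equivalent from the storage domain settings):<br>
    <pre>
# primary volfile server plus a failover for the native gluster client
mount -t glusterfs -o backupvolfile-server=gluster2 gluster1:/rhev-data /mnt/rhev-data
    </pre>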
    <br>
    At this stage the VMs do not directly access the gluster storage,
    although one VM will soon have a gluster volume attached to it (the
    one that's still being prepared), as it requires a lot of storage
    space.<br>
    <br>
    We use RHEV bare-metal hypervisors, so there is no option to
    upgrade, other than to install new versions as they become
    available. They are installed from an ISO, just like a regular
    distro, but these are extremely cut-down versions of RHEL6, with all
    the good stuff left out. They could potentially be upgraded by
    building the client from source and copying it over, but I'm not
    going down that road, for multiple reasons.<br>
    <br>
    regards,<br>
    John<br>
    <br>
    <br>
    <div class="moz-cite-prefix">On 25/02/15 10:36, aytac zeren wrote:<br>
    </div>
    <blockquote
cite="mid:CAK2sgnmT0YdxeFGiYT0tQJBxUsu1uQ2WfRArd4rSp09BqypMVA@mail.gmail.com"
      type="cite">
      <div dir="ltr">Hi John,
        <div><br>
        </div>
        <div>Would you please share your scenario? </div>
        <div><br>
        </div>
        <div>* How many nodes are running as gluster servers?</div>
        <div>* Which application is accessing the gluster volume, and by
          which means (NFS, CIFS, Gluster Client)?</div>
        <div>* Are you accessing the volume through a client, or are the
          clients accessing the volume themselves (like KVM nodes)?</div>
        <div><br>
        </div>
        <div>Best Regards</div>
        <div>Aytac</div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On Wed, Feb 25, 2015 at 1:18 AM, John
          Gardeniers <span dir="ltr">&lt;<a moz-do-not-send="true"
              href="mailto:jgardeniers@objectmastery.com"
              target="_blank">jgardeniers@objectmastery.com</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000"> Problem solved, more
              or less.<br>
              <br>
              After reading Aytac's comment about 3.6.2 not being
              considered stable yet, I removed it from the new node,
              removed /var/lib/glusterd/, rebooted (just to be sure) and
              installed 3.5.3. After detaching and re-probing the peer,
              the replace-brick command worked and the volume is
              currently happily undergoing a self-heal. At a later and
              more convenient time I'll upgrade the 3.4.2 node to the
              same version. As previously stated, I cannot upgrade the
              clients, so they will just have to stay where they are.<span
                class=""><br>
                <br>
                regards,<br>
                John<br>
                <br>
                <br>
                <div>On 25/02/15 08:27, aytac zeren wrote:<br>
                </div>
              </span>
              <blockquote type="cite">
                <div>
                  <div class="h5">
                    <div dir="ltr">
                      <div>Hi John,</div>
                      <div><br>
                      </div>
                      3.6.2 is a major release and introduces some new
                      cluster-wide features. Additionally, it is not yet
                      considered stable. The best way to do this would be
                      to set up another cluster on 3.6.2, access the
                      3.4.0 cluster via NFS or the native client, and
                      copy the content over to the 3.6.2 cluster
                      gradually. As the amount of data on the 3.4.0
                      cluster decreases, you can remove the 3.4.0 members
                      from that cluster, upgrade them and add them, with
                      their bricks, to the 3.6.2 trusted pool. Please be
                      careful while doing this, as the number of nodes in
                      your cluster has to stay consistent with your
                      volume design (striped, replicated, distributed or
                      a combination of them).
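                      <div><br>
                      </div>
                      <div>Something along these lines, as a rough
                        illustration only (hostnames, volume names and
                        paths are made up):</div>
                      <pre>
# on a client or a node of the new 3.6.2 cluster: pull the data across
mount -t glusterfs old1:/oldvol /mnt/oldvol    # or: mount -t nfs -o vers=3 old1:/oldvol /mnt/oldvol
mount -t glusterfs new1:/newvol /mnt/newvol
rsync -a /mnt/oldvol/ /mnt/newvol/             # repeat until the old volume is drained

# once an old node has been emptied and upgraded to 3.6.2:
gluster peer probe old1        # run from a node already in the new trusted pool
# for a replica 2 volume, bricks must be added in matching pairs
gluster volume add-brick newvol old1:/bricks/newvol old2:/bricks/newvol
                      </pre>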
                      <div><br>
                      </div>
                      <div>Notice: I don't take any responsibility for
                        the actions you undertake based on my
                        recommendations, as they are general and do not
                        take your architectural design into
                        consideration.</div>
                      <div><br>
                      </div>
                      <div>BR</div>
                      <div>Aytac</div>
                    </div>
                    <div class="gmail_extra"><br>
                      <div class="gmail_quote">On Tue, Feb 24, 2015 at
                        11:19 PM, John Gardeniers <span dir="ltr">&lt;<a
                            moz-do-not-send="true"
                            href="mailto:jgardeniers@objectmastery.com"
                            target="_blank">jgardeniers@objectmastery.com</a>&gt;</span>
                        wrote:<br>
                        <blockquote class="gmail_quote" style="margin:0
                          0 0 .8ex;border-left:1px #ccc
                          solid;padding-left:1ex">Hi All,<br>
                          <br>
                          We have a gluster volume using replica 2, with
                          a single brick on each node. Both nodes are
                          currently running gluster 3.4.2 and I wish to
                          replace one of the nodes with a new server
                          (rigel), which has gluster 3.6.2.<br>
                          <br>
                          Following this link:<br>
                          <br>
                          <a moz-do-not-send="true"
href="https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Replacing_an_Old_Brick_with_a_New_Brick_on_a_Replicate_or_Distribute-replicate_Volume.html"
                            target="_blank">https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Replacing_an_Old_Brick_with_a_New_Brick_on_a_Replicate_or_Distribute-replicate_Volume.html</a><br>
                          <br>
                          I tried to do a replace-brick but got "volume
                          replace-brick: failed: Host rigel is not in
                          'Peer in Cluster' state". Is this due to a
                          version incompatibility or is it due to some
                          other issue? A bit of googling reveals the
                          error message in bug reports but I've not yet
                          found anything that applies to this specific
                          case.<br>
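                          <br>
                          For clarity, this was attempted presumably
                          along these lines (volume name and brick paths
                          are placeholders):<br>
                          <pre>
# rigel needs to show "Peer in Cluster (Connected)" here first
gluster peer status

# the replacement command, in the form used for replicate volumes (placeholder names)
gluster volume replace-brick myvol oldnode:/bricks/myvol rigel:/bricks/myvol commit force
                          </pre>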
                          <br>
                          Incidentally, the clients (RHEV bare metal
                          hypervisors, so we have no upgrade option) are
                          running 3.4.0. Will this be a problem if the
                          nodes are on 3.6.2?<br>
                          <br>
                          regards,<br>
                          John<br>
                          <br>
_______________________________________________<br>
                          Gluster-users mailing list<br>
                          <a moz-do-not-send="true"
                            href="mailto:Gluster-users@gluster.org"
                            target="_blank">Gluster-users@gluster.org</a><br>
                          <a moz-do-not-send="true"
                            href="http://www.gluster.org/mailman/listinfo/gluster-users"
                            target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
                        </blockquote>
                      </div>
                      <br>
                    </div>
                    <br clear="all">
                  </div>
                </div>
              </blockquote>
              <br>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
      <br clear="all">
    </blockquote>
    <br>
  </body>
</html>