<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <br>
    <div class="moz-cite-prefix">On 02/07/2015 01:39 AM, Humble Devassy
      Chirammal wrote:<br>
    </div>
    <blockquote
cite="mid:CAEAGfOUOHAm0UvE+DPuzY4GKs7m+z5y8U+AkAb_UQCOcBF_rYg@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div>Lala,<br>
          &gt;<br>
          Gluster 3.4 server bits with 3.6 client bits should work fine.
          <br>
          &gt;<br>
          <br>
        </div>
        Have you tested this configuration? IIUC, the 'AFR' module
        (replication) introduced its 'version 2' implementation in 3.6,
        which is not compatible with the older version. GlusterFS 3.4
        &amp; 3.5 ship with AFR v1, so I really doubt the mentioned
        configuration will work perfectly. The AFR developers can
        confirm, though.<br>
      </div>
      <div class="gmail_extra"><br clear="all">
        <div>
          <div class="gmail_signature">
            <div dir="ltr">
              <div>--Humble<br>
              </div>
              <br>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    That is right. Usually newer clients work with older servers, but
    3.6 saw a complete rewrite of AFR (AFR v2) which is not fully
    compatible with AFR v1. It would be best to update all the clients
    to 3.6 as well.<br>
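    To make that rule of thumb concrete, here is a minimal sketch (a
    hypothetical helper, not part of any gluster tool) that flags
    client/server pairs whose versions straddle the 3.6 AFR rewrite. It
    assumes plain 3.x "major.minor[.patch]" version strings, such as
    those printed by `glusterfs --version`.<br>

```shell
# Hypothetical helper (not a gluster command): warn when the client and
# server versions straddle the AFR rewrite in 3.6, i.e. one side runs
# AFR v2 and the other AFR v1. Assumes 3.x "major.minor[.patch]" strings.
check_afr_compat() {
    cm=${1#*.}; cm=${cm%%.*}   # client minor version
    sm=${2#*.}; sm=${sm%%.*}   # server minor version
    if { [ "$cm" -ge 6 ] && [ "$sm" -lt 6 ]; } ||
       { [ "$cm" -lt 6 ] && [ "$sm" -ge 6 ]; }; then
        echo "mismatch: AFR v2 vs AFR v1 (client $1, server $2)"
    else
        echo "ok"
    fi
}

check_afr_compat 3.6.2 3.4.2   # the pairing questioned in this thread
```

    Matching minor versions on both sides (3.4 with 3.4, or 3.6 with
    3.6) report "ok"; only a v1/v2 straddle is flagged.<br>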
    -Ravi<br>
    <blockquote
cite="mid:CAEAGfOUOHAm0UvE+DPuzY4GKs7m+z5y8U+AkAb_UQCOcBF_rYg@mail.gmail.com"
      type="cite">
      <div class="gmail_extra">
        <br>
        <div class="gmail_quote">On Fri, Feb 6, 2015 at 7:01 PM,
          Lalatendu Mohanty <span dir="ltr">&lt;<a
              moz-do-not-send="true" href="mailto:lmohanty@redhat.com"
              target="_blank">lmohanty@redhat.com</a>&gt;</span> wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000"><span class="">
                <div>On 02/06/2015 08:11 AM, Humble Devassy Chirammal
                  wrote:<br>
                </div>
                <blockquote type="cite">
                  <div dir="ltr">
                    <div><span></span><span>On 02/05/2015 11:56 PM, Nux!
                        wrote:<br>
                        <blockquote class="gmail_quote"
                          style="margin:0px 0px 0px
                          0.8ex;border-left:1px solid
                          rgb(204,204,204);padding-left:1ex"> Thanks for
                          sharing.<br>
                          Any idea if 3.6.2 still is compatible with
                          v3.4 servers?<br>
                        </blockquote>
                        <br>
                      </span> &gt;You mean 3.6.2 client bits with v3.4
                      servers? yes, it should work fine.<br>
                      <br>
                      <br>
                    </div>
                    <div>AFAICT, this will *not* work and it's *not*
                      supported. <br>
                    </div>
                    <div><br>
                      <br>
                    </div>
                  </div>
                </blockquote>
                <br>
              </span> Humble,<br>
              <br>
              Gluster 3.4 server bits with 3.6 client bits should work
              fine. <br>
              <br>
              But I think the reverse (i.e., 3.6 server bits with older
              client bits) is not compatible, because of the issues below:<br>
              <ul>
                <li>Older clients cannot mount a volume newly created on
                  3.6, because readdir-ahead is enabled on the volume by
                  default and is not present in older clients.<br>
                </li>
                <li>Rebalance cannot be run on any volume created with
                  3.6 bits (with or without readdir-ahead) while older
                  clients are connected; the rebalance command will
                  error out.</li>
              </ul>
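              As a workaround for the first point, readdir-ahead can be
              turned off on the volume with `gluster volume set VOLNAME
              performance.readdir-ahead off` (VOLNAME is a placeholder).
              Since that command needs a live cluster, the runnable
              sketch below only parses saved `gluster volume info`
              output (a made-up sample) to report the option's state,
              falling back to the 3.6 default of "on" when the option is
              not set explicitly.<br>

```shell
# Workaround sketch for older clients: readdir-ahead would be disabled
# on a live cluster with
#   gluster volume set VOLNAME performance.readdir-ahead off
# (VOLNAME is a placeholder). The helper below is hypothetical and only
# parses captured `gluster volume info` text, so it runs without gluster.

readdir_ahead_state() {
    # Print the explicit option value, or "default(on)" -- 3.6 enables
    # readdir-ahead on newly created volumes by default.
    state=$(printf '%s\n' "$1" | sed -n 's/^performance\.readdir-ahead: //p')
    [ -n "$state" ] && echo "$state" || echo "default(on)"
}

sample_info='Volume Name: myvol
Type: Replicate
Options Reconfigured:
performance.readdir-ahead: off'

readdir_ahead_state "$sample_info"   # prints: off
```

              A freshly created 3.6 volume with no options reconfigured
              would report "default(on)", which is exactly the state
              that blocks older clients from mounting.<br>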
              <p>Thanks,<br>
                Lala<br>
              </p>
              <div>
                <div class="h5">
                  <blockquote type="cite">
                    <div dir="ltr">
                      <div><br>
                      </div>
                    </div>
                    <div class="gmail_extra"><br clear="all">
                      <div>
                        <div>
                          <div dir="ltr">
                            <div>--Humble<br>
                            </div>
                            <br>
                          </div>
                        </div>
                      </div>
                      <br>
                      <div class="gmail_quote">On Fri, Feb 6, 2015 at
                        5:02 AM, Lalatendu Mohanty <span dir="ltr">&lt;<a
                            moz-do-not-send="true"
                            href="mailto:lmohanty@redhat.com"
                            target="_blank">lmohanty@redhat.com</a>&gt;</span>
                        wrote:<br>
                        <blockquote class="gmail_quote" style="margin:0
                          0 0 .8ex;border-left:1px #ccc
                          solid;padding-left:1ex">+ gluster-users<span><br>
                            On 02/05/2015 11:56 PM, Nux! wrote:<br>
                            <blockquote class="gmail_quote"
                              style="margin:0 0 0 .8ex;border-left:1px
                              #ccc solid;padding-left:1ex"> Thanks for
                              sharing.<br>
                              Any idea if 3.6.2 still is compatible with
                              v3.4 servers?<br>
                            </blockquote>
                            <br>
                          </span> You mean 3.6.2 client bits with v3.4
                          servers? yes, it should work fine.<br>
                          <br>
                          -Lala
                          <div>
                            <div><br>
                              <blockquote class="gmail_quote"
                                style="margin:0 0 0 .8ex;border-left:1px
                                #ccc solid;padding-left:1ex"> --<br>
                                Sent from the Delta quadrant using Borg
                                technology!<br>
                                <br>
                                Nux!<br>
                                <a moz-do-not-send="true"
                                  href="http://www.nux.ro"
                                  target="_blank">www.nux.ro</a><br>
                                <br>
                                ----- Original Message -----<br>
                                <blockquote class="gmail_quote"
                                  style="margin:0 0 0
                                  .8ex;border-left:1px #ccc
                                  solid;padding-left:1ex"> From:
                                  "Karanbir Singh" &lt;<a
                                    moz-do-not-send="true"
                                    href="mailto:mail-lists@karan.org"
                                    target="_blank">mail-lists@karan.org</a>&gt;<br>
                                  To: "The CentOS developers mailing
                                  list." &lt;<a moz-do-not-send="true"
                                    href="mailto:centos-devel@centos.org"
                                    target="_blank">centos-devel@centos.org</a>&gt;<br>
                                  Sent: Thursday, 5 February, 2015
                                  22:11:53<br>
                                  Subject: [CentOS-devel] Gluster
                                  Updates for Storage SIG<br>
                                  The CentOS Storage SIG has updated
                                  Gluster to 3.6.2 in the community<br>
                                  testing repos. You can find more
                                  information on how to get started with<br>
                                  this repo at:<br>
                                  <a moz-do-not-send="true"
href="http://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart"
                                    target="_blank">http://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart</a><br>
                                  <br>
                                  The following RPMs have been updated:<br>
                                  <br>
                                  CentOS-6<br>
                                  i386/glusterfs-3.6.2-2.el6.i386.rpm<br>
i386/glusterfs-api-3.6.2-2.el6.i386.rpm<br>
i386/glusterfs-api-devel-3.6.2-2.el6.i386.rpm<br>
i386/glusterfs-cli-3.6.2-2.el6.i386.rpm<br>
i386/glusterfs-devel-3.6.2-2.el6.i386.rpm<br>
i386/glusterfs-extra-xlators-3.6.2-2.el6.i386.rpm<br>
i386/glusterfs-fuse-3.6.2-2.el6.i386.rpm<br>
i386/glusterfs-geo-replication-3.6.2-2.el6.i386.rpm<br>
i386/glusterfs-libs-3.6.2-2.el6.i386.rpm<br>
i386/glusterfs-rdma-3.6.2-2.el6.i386.rpm<br>
i386/glusterfs-resource-agents-3.6.2-2.el6.noarch.rpm<br>
i386/glusterfs-server-3.6.2-2.el6.i386.rpm<br>
                                  <br>
x86_64/glusterfs-3.6.2-2.el6.x86_64.rpm<br>
x86_64/glusterfs-api-3.6.2-2.el6.i386.rpm<br>
x86_64/glusterfs-api-3.6.2-2.el6.x86_64.rpm<br>
x86_64/glusterfs-api-devel-3.6.2-2.el6.i386.rpm<br>
x86_64/glusterfs-api-devel-3.6.2-2.el6.x86_64.rpm<br>
x86_64/glusterfs-cli-3.6.2-2.el6.x86_64.rpm<br>
x86_64/glusterfs-devel-3.6.2-2.el6.i386.rpm<br>
x86_64/glusterfs-devel-3.6.2-2.el6.x86_64.rpm<br>
x86_64/glusterfs-extra-xlators-3.6.2-2.el6.x86_64.rpm<br>
x86_64/glusterfs-fuse-3.6.2-2.el6.x86_64.rpm<br>
x86_64/glusterfs-geo-replication-3.6.2-2.el6.x86_64.rpm<br>
x86_64/glusterfs-libs-3.6.2-2.el6.i386.rpm<br>
x86_64/glusterfs-libs-3.6.2-2.el6.x86_64.rpm<br>
x86_64/glusterfs-rdma-3.6.2-2.el6.x86_64.rpm<br>
x86_64/glusterfs-resource-agents-3.6.2-2.el6.noarch.rpm<br>
x86_64/glusterfs-server-3.6.2-2.el6.x86_64.rpm<br>
                                  <br>
                                  CentOS-7<br>
x86_64/glusterfs-3.6.2-2.el7.x86_64.rpm<br>
x86_64/glusterfs-api-3.6.2-2.el7.x86_64.rpm<br>
x86_64/glusterfs-api-devel-3.6.2-2.el7.x86_64.rpm<br>
x86_64/glusterfs-cli-3.6.2-2.el7.x86_64.rpm<br>
x86_64/glusterfs-devel-3.6.2-2.el7.x86_64.rpm<br>
x86_64/glusterfs-extra-xlators-3.6.2-2.el7.x86_64.rpm<br>
x86_64/glusterfs-fuse-3.6.2-2.el7.x86_64.rpm<br>
x86_64/glusterfs-geo-replication-3.6.2-2.el7.x86_64.rpm<br>
x86_64/glusterfs-libs-3.6.2-2.el7.x86_64.rpm<br>
x86_64/glusterfs-rdma-3.6.2-2.el7.x86_64.rpm<br>
x86_64/glusterfs-resource-agents-3.6.2-2.el7.noarch.rpm<br>
x86_64/glusterfs-server-3.6.2-2.el7.x86_64.rpm<br>
                                  <br>
                                  <br>
                                  This release fixes the following bugs;
                                  the content below is copied from the<br>
                                  GlusterFS upstream release mail [1].<br>
                                  <br>
                                  1184191 - Cluster/DHT : Fixed crash
                                  due to null deref<br>
                                  1180404 - nfs server restarts when a
                                  snapshot is deactivated<br>
                                  1180411 - CIFS:[USS]: glusterfsd OOM
                                  killed when 255 snapshots were<br>
                                  browsed at CIFS mount and Control+C is
                                  issued<br>
                                  1180070 - [AFR] getfattr on fuse mount
                                  gives error : Software caused<br>
                                  connection abort<br>
                                  1175753 - [readdir-ahead]: indicate
                                  EOF for readdirp<br>
                                  1175752 - [USS]: On a successful
                                  lookup, snapd logs are filled with<br>
                                  Warnings "dict OR key (entry-point) is
                                  NULL"<br>
                                  1175749 - glusterfs client crashed
                                  while migrating the fds<br>
                                  1179658 - Add brick fails if parent
                                  dir of new brick and existing brick<br>
                                  is same and volume was accessed using
                                  libgfapi and smb.<br>
                                  1146524 - <a moz-do-not-send="true"
                                    href="http://glusterfs.spec.in"
                                    target="_blank">glusterfs.spec.in</a>
                                  - synch minor diffs with fedora
                                  dist-git<br>
                                  glusterfs.spec<br>
                                  1175744 - [USS]: Unable to access
                                  .snaps after snapshot restore after<br>
                                  directories were deleted and recreated<br>
                                  1175742 - [USS]: browsing .snaps
                                  directory with CIFS fails with<br>
                                  "Invalid argument"<br>
                                  1175739 - [USS]: Non root user who has
                                  no access to a directory, from<br>
                                  NFS mount, is able to access the files
                                  under .snaps under that directory<br>
                                  1175758 - [USS] : Rebalance process
                                  tries to connect to snapd and in<br>
                                  case when snapd crashes it might
                                  affect rebalance process<br>
                                  1175765 - USS]: When snapd is crashed
                                  gluster volume stop/delete<br>
                                  operation fails making the cluster in
                                  inconsistent state<br>
                                  1173528 - Change in volume heal info
                                  command output<br>
                                  1166515 - [Tracker] RDMA support in
                                  glusterfs<br>
                                  1166505 - mount fails for nfs protocol
                                  in rdma volumes<br>
                                  1138385 - [DHT:REBALANCE]: Rebalance
                                  failures are seen with error<br>
                                  message " remote operation failed:
                                  File exists"<br>
                                  1177418 - entry self-heal in 3.5 and
                                  3.6 are not compatible<br>
                                  1170954 - Fix mutex problems reported
                                  by coverity scan<br>
                                  1177899 - nfs: ls shows "Permission
                                  denied" with root-squash<br>
                                  1175738 - [USS]: data unavailability
                                  for a period of time when USS is<br>
                                  enabled/disabled<br>
                                  1175736 - [USS]:After deactivating a
                                  snapshot trying to access the<br>
                                  remaining activated snapshots from NFS
                                  mount gives 'Invalid argument' error<br>
                                  1175735 - [USS]: snapd process is not
                                  killed once the glusterd comes back<br>
                                  1175733 - [USS]: If the snap name is
                                  same as snap-directory than cd to<br>
                                  virtual snap directory fails<br>
                                  1175756 - [USS] : Snapd crashed while
                                  trying to access the snapshots<br>
                                  under .snaps directory<br>
                                  1175755 - SNAPSHOT[USS]:gluster volume
                                  set for uss doesnot check any<br>
                                  boundaries<br>
                                  1175732 - [SNAPSHOT]: nouuid is
                                  appended for every snapshoted brick<br>
                                  which causes duplication if the
                                  original brick has already nouuid<br>
                                  1175730 - [USS]: creating
                                  file/directories under .snaps shows
                                  wrong<br>
                                  error message<br>
                                  1175754 - [SNAPSHOT]: before the snap
                                  is marked to be deleted if the<br>
                                  node goes down than the snaps are
                                  propagated on other nodes and glusterd<br>
                                  hungs<br>
                                  1159484 - ls -alR can not heal the
                                  disperse volume<br>
                                  1138897 - NetBSD port<br>
                                  1175728 - [USS]: All uss related logs
                                  are reported under<br>
                                  /var/log/glusterfs, it makes sense to
                                  move it into subfolder<br>
                                  1170548 - [USS] : don't display the
                                  snapshots which are not activated<br>
                                  1170921 - [SNAPSHOT]: snapshot should
                                  be deactivated by default when<br>
                                  created<br>
                                  1175694 - [SNAPSHOT]: snapshoted
                                  volume is read only but it shows rw<br>
                                  attributes in mount<br>
                                  1161885 - Possible file corruption on
                                  dispersed volumes<br>
                                  1170959 - EC_MAX_NODES is defined
                                  incorrectly<br>
                                  1175645 - [USS]: Typo error in the
                                  description for USS under "gluster<br>
                                  volume set help"<br>
                                  1171259 - mount.glusterfs does not
                                  understand -n option<br>
                                  <br>
                                  [1] <a moz-do-not-send="true"
href="http://www.gluster.org/pipermail/gluster-devel/2015-January/043617.html"
                                    target="_blank">http://www.gluster.org/pipermail/gluster-devel/2015-January/043617.html</a><br>
                                  <br>
                                  --<br>
                                  Karanbir Singh<br>
                                  +44-207-0999389 | <a
                                    moz-do-not-send="true"
                                    href="http://www.karan.org/"
                                    target="_blank">http://www.karan.org/</a>
                                  | <a moz-do-not-send="true"
                                    href="http://twitter.com/kbsingh"
                                    target="_blank">twitter.com/kbsingh</a><br>
                                  GnuPG Key : <a moz-do-not-send="true"
href="http://www.karan.org/publickey.asc" target="_blank">http://www.karan.org/publickey.asc</a><br>
_______________________________________________<br>
                                  CentOS-devel mailing list<br>
                                  <a moz-do-not-send="true"
                                    href="mailto:CentOS-devel@centos.org"
                                    target="_blank">CentOS-devel@centos.org</a><br>
                                  <a moz-do-not-send="true"
                                    href="http://lists.centos.org/mailman/listinfo/centos-devel"
                                    target="_blank">http://lists.centos.org/mailman/listinfo/centos-devel</a><br>
                                </blockquote>
                              </blockquote>
                              <br>
                            </div>
                          </div>
_______________________________________________<br>
                          Gluster-users mailing list<br>
                          <a moz-do-not-send="true"
                            href="mailto:Gluster-users@gluster.org"
                            target="_blank">Gluster-users@gluster.org</a><br>
                          <a moz-do-not-send="true"
                            href="http://www.gluster.org/mailman/listinfo/gluster-users"
                            target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
                        </blockquote>
                      </div>
                      <br>
                    </div>
                  </blockquote>
                  <br>
                </div>
              </div>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
      <br>
      <br>
    </blockquote>
    <br>
  </body>
</html>