<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
And here's the ESXi log:<br>
    <br>
    2016-03-15T12:50:17.982Z cpu41:33260)WARNING: NFS: 4566: Short read
    for object b00f 60 6b5d5087 c5d851fc 4c474f3a 8efd48b3 9d4617b1
    a556c0bc 30c9df6b 16c12f10 250219b3f4146a7 431899d1 0 431200000000
    offset: 0x0 requested: 0x200 read: 0x95<br>
    2016-03-15T12:50:17.983Z cpu41:33260)WARNING: NFS: 4566: Short read
    for object b00f 60 6b5d5087 c5d851fc 4c474f3a 8efd48b3 9d4617b1
    a556c0bc 30c9df6b 16c12f10 250219b3f4146a7 431899d1 0 431200000000
    offset: 0x0 requested: 0x200 read: 0x95<br>
    2016-03-15T12:50:17.984Z cpu41:33260)WARNING: NFS: 4566: Short read
    for object b00f 60 6b5d5087 c5d851fc 4c474f3a 8efd48b3 9d4617b1
    a556c0bc 30c9df6b 16c12f10 250219b3f4146a7 431899d1 0 431200000000
    offset: 0x0 requested: 0x200 read: 0x95<br>
    2016-03-15T12:50:17.995Z cpu41:35990)WARNING: NFS: 4566: Short read
    for object b00f 60 6b5d5087 c5d851fc 4c474f3a 8efd48b3 9d4617b1
    a556c0bc 30c9df6b 16c12f10 250219b3f4146a7 431899d1 0 431200000000
    offset: 0x0 requested: 0x200 read: 0x95<br>
    2016-03-15T12:50:18.000Z cpu41:35990)WARNING: NFS: 4566: Short read
    for object b00f 60 6b5d5087 c5d851fc 4c474f3a 8efd48b3 9d4617b1
    a556c0bc 30c9df6b 16c12f10 250219b3f4146a7 431899d1 0 431200000000
    offset: 0x0 requested: 0x200 read: 0x95<br>
    2016-03-15T12:50:18.031Z cpu41:35990)WARNING: NFS: 4566: Short read
    for object b00f 60 6b5d5087 c5d851fc 4c474f3a 8efd48b3 9d4617b1
    a556c0bc 30c9df6b 16c12f10 250219b3f4146a7 431899d1 0 431200000000
    offset: 0x0 requested: 0x200 read: 0x95<br>
    2016-03-15T12:50:18.032Z cpu41:35990)WARNING: NFS: 4566: Short read
    for object b00f 60 6b5d5087 c5d851fc 4c474f3a 8efd48b3 9d4617b1
    a556c0bc 30c9df6b 16c12f10 250219b3f4146a7 431899d1 0 431200000000
    offset: 0x0 requested: 0x200 read: 0x95<br>
    2016-03-15T12:50:18.032Z cpu41:35990)WARNING: NFS: 4566: Short read
    for object b00f 60 6b5d5087 c5d851fc 4c474f3a 8efd48b3 9d4617b1
    a556c0bc 30c9df6b 16c12f10 250219b3f4146a7 431899d1 0 431200000000
    offset: 0x0 requested: 0x200 read: 0x95<br>
    2016-03-15T12:50:18.043Z cpu41:35990)WARNING: NFS: 4566: Short read
    for object b00f 60 6b5d5087 c5d851fc 4c474f3a 8efd48b3 9d4617b1
    a556c0bc 30c9df6b 16c12f10 250219b3f4146a7 431899d1 0 431200000000
    offset: 0x0 requested: 0x200 read: 0x95<br>
    2016-03-15T12:50:18.048Z cpu41:35990)WARNING: NFS: 4566: Short read
    for object b00f 60 6b5d5087 c5d851fc 4c474f3a 8efd48b3 9d4617b1
    a556c0bc 30c9df6b 16c12f10 250219b3f4146a7 431899d1 0 431200000000
    offset: 0x0 requested: 0x200 read: 0x95<br>
    <br>
    <div class="moz-signature"><font color="#3366ff"><font
          color="#000000">Respectfully<b><br>
          </b><b>Mahdi A. Mahdi</b></font></font><font color="#3366ff"><br>
      </font><font color="#3366ff"><font color="#000000"><br>
          Skype: </font><a href="mailto:mahdi.adnan@outlook.com"
          moz-do-not-send="true"><font color="#000000">mahdi.adnan@outlook.com</font></a></font></div>
    <div class="moz-cite-prefix">On 03/15/2016 03:06 PM, Mahdi Adnan
      wrote:<br>
    </div>
    <blockquote cite="mid:56E7FAC1.40101@earthlinktele.com" type="cite">
      <meta content="text/html; charset=windows-1252"
        http-equiv="Content-Type">
      [2016-03-15 14:12:01.421615] I [MSGID: 109036]
      [dht-common.c:8043:dht_log_new_layout_for_dir_selfheal] 0-v-dht:
      Setting layout of /New Virtual Machine_2 with [Subvol_name:
      v-replicate-0, Err: -1 , Start: 0 , Stop: 1431655764 , Hash: 1 ],
      [Subvol_name: v-replicate-1, Err: -1 , Start: 1431655765 , Stop:
      2863311529 , Hash: 1 ], [Subvol_name: v-replicate-2, Err: -1 ,
      Start: 2863311530 , Stop: 4294967295 , Hash: 1 ], <br>
      [2016-03-15 14:12:02.001167] I [MSGID: 109066]
      [dht-rename.c:1413:dht_rename] 0-v-dht: renaming /New Virtual
      Machine_2/New Virtual Machine.vmdk~
      (hash=v-replicate-2/cache=v-replicate-2) =&gt; /New Virtual
      Machine_2/New Virtual Machine.vmdk
      (hash=v-replicate-2/cache=v-replicate-2)<br>
      [2016-03-15 14:12:02.248164] W [MSGID: 112032]
      [nfs3.c:3622:nfs3svc_rmdir_cbk] 0-nfs: 3fed7d9f: /New Virtual
      Machine_2 =&gt; -1 (Directory not empty) [Directory not empty]<br>
      [2016-03-15 14:12:02.259015] W [MSGID: 112032]
      [nfs3.c:3622:nfs3svc_rmdir_cbk] 0-nfs: 3fed7da3: /New Virtual
      Machine_2 =&gt; -1 (Directory not empty) [Directory not empty]<br>
      <br>
      <br>
      <div class="moz-signature"><font color="#3366ff"><font
            color="#000000">Respectfully<b><br>
            </b><b>Mahdi A. Mahdi</b></font></font><font color="#3366ff"><br>
          <br>
        </font></div>
      <div class="moz-cite-prefix">On 03/15/2016 03:03 PM, Krutika
        Dhananjay wrote:<br>
      </div>
      <blockquote
cite="mid:CAPhYV8PChUQQFGM2xUiMOiROd7-uApUa1d0ZWoyU+h8N5M_A7A@mail.gmail.com"
        type="cite">
        <div dir="ltr">
          <div>Hmm ok. Could you share the nfs.log content?<br>
            <br>
          </div>
          -Krutika<br>
        </div>
        <div class="gmail_extra"><br>
          <div class="gmail_quote">On Tue, Mar 15, 2016 at 1:45 PM,
            Mahdi Adnan <span dir="ltr">&lt;<a moz-do-not-send="true"
                href="mailto:mahdi.adnan@earthlinktele.com"
                target="_blank">mahdi.adnan@earthlinktele.com</a>&gt;</span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0 0 0
              .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> Okay, here's what I did:<br>
                <br>
                Volume Name: v<br>
                Type: Distributed-Replicate<br>
                Volume ID: b348fd8e-b117-469d-bcc0-56a56bdfc930<br>
                Status: Started<span class=""><br>
                  Number of Bricks: 3 x 2 = 6<br>
                  Transport-type: tcp<br>
                  Bricks:<br>
                </span><span class=""> Brick1: gfs001:/bricks/b001/v<br>
                  Brick2: gfs001:/bricks/b002/v<br>
                  Brick3: gfs001:/bricks/b003/v<br>
                </span> Brick4: gfs002:/bricks/b004/v<br>
                Brick5: gfs002:/bricks/b005/v<br>
                Brick6: gfs002:/bricks/b006/v<span class=""><br>
                  Options Reconfigured:<br>
                  features.shard-block-size: 128MB<br>
                  features.shard: enable<br>
                  cluster.server-quorum-type: server<br>
                  cluster.quorum-type: auto<br>
                  network.remote-dio: enable<br>
                  cluster.eager-lock: enable<br>
                  performance.stat-prefetch: off<br>
                  performance.io-cache: off<br>
                  performance.read-ahead: off<br>
                  performance.quick-read: off<br>
                  performance.readdir-ahead: on<br>
                  <br>
                  <br>
</span> Same error.<br>
And mounting it using glusterfs still works just fine.<br>
                <br>
                <div><font color="#3366ff"><font color="#000000">Respectfully<b><br>
                      </b><b>Mahdi A. Mahdi</b></font></font><font
                    color="#3366ff"><br>
                  </font><br>
                  <br>
                </div>
                <div>
                  <div class="h5">
                    <div>On 03/15/2016 11:04 AM, Krutika Dhananjay
                      wrote:<br>
                    </div>
                    <blockquote type="cite">
                      <div dir="ltr">
                        <div>
                          <div>OK but what if you use it with
                            replication? Do you still see the error? I
                            think not.<br>
                          </div>
                          Could you give it a try and tell me what you
                          find?<br>
                          <br>
                        </div>
                        -Krutika<br>
                      </div>
                      <div class="gmail_extra"><br>
                        <div class="gmail_quote">On Tue, Mar 15, 2016 at
1:23 PM, Mahdi Adnan <span dir="ltr">&lt;<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:mahdi.adnan@earthlinktele.com">mahdi.adnan@earthlinktele.com</a>&gt;</span>
                          wrote:<br>
                          <blockquote class="gmail_quote"
                            style="margin:0 0 0 .8ex;border-left:1px
                            #ccc solid;padding-left:1ex">
                            <div text="#000000" bgcolor="#FFFFFF"> Hi,<br>
                              <br>
                              I have created the following volume;<br>
                              <br>
                              Volume Name: v<br>
                              Type: Distribute<br>
                              Volume ID:
                              90de6430-7f83-4eda-a98f-ad1fabcf1043<br>
                              Status: Started<br>
                              Number of Bricks: 3<br>
                              Transport-type: tcp<br>
                              Bricks:<br>
                              Brick1: gfs001:/bricks/b001/v<br>
                              Brick2: gfs001:/bricks/b002/v<br>
                              Brick3: gfs001:/bricks/b003/v<br>
                              Options Reconfigured:<br>
                              features.shard-block-size: 128MB<span><br>
                                features.shard: enable<br>
                                cluster.server-quorum-type: server<br>
                                cluster.quorum-type: auto<br>
                                network.remote-dio: enable<br>
                                cluster.eager-lock: enable<br>
                                performance.stat-prefetch: off<br>
                                performance.io-cache: off<br>
                                performance.read-ahead: off<br>
                                performance.quick-read: off<br>
                              </span> performance.readdir-ahead: on<br>
                              <br>
After mounting it in ESXi and trying to clone a VM to it, I got the same error.<br>
                              <br>
                              <br>
                              <div><font color="#3366ff"><font
                                    color="#000000">Respectfully<b><br>
                                    </b><b>Mahdi A. Mahdi</b></font></font><font
                                  color="#3366ff"><br>
                                </font><br>
                                <br>
                              </div>
                              <div>
                                <div>
                                  <div>On 03/15/2016 10:44 AM, Krutika
                                    Dhananjay wrote:<br>
                                  </div>
                                  <blockquote type="cite">
                                    <div dir="ltr">
                                      <div>
                                        <div>
                                          <div>
                                            <div>
                                              <div>
                                                <div>
                                                  <div>Hi,<br>
                                                    <br>
                                                  </div>
                                                  Do not use sharding
                                                  and stripe together in
                                                  the same volume
                                                  because<br>
                                                </div>
                                                a) It is not recommended
                                                and there is no point in
                                                using both. Using
                                                sharding alone on your
                                                volume should work fine.<br>
                                              </div>
b) Nobody has tested it.<br>
                                            </div>
                                            c) Like Niels said, stripe
                                            feature is virtually
                                            deprecated.<br>
                                            <br>
                                          </div>
I would suggest that you create an nx3 volume, where n is the number of distribute subvols you prefer, enable the group virt options on it, enable sharding on it, set the shard-block-size that you feel is appropriate, and then just start off with VM image creation etc.<br>
                                        </div>
                                        If you run into any issues even
                                        after you do this, let us know
                                        and we'll help you out.<br>
                                        <br>
                                      </div>
                                      -Krutika  <br>
                                    </div>
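For concreteness, the nx3-with-virt-group setup suggested above might be created along these lines. This is a sketch only: the hostnames, brick paths, and shard size below are placeholders (not taken from this thread), and `group virt` applies the virt option profile shipped with the glusterfs-server package.

```shell
# Hypothetical hosts/brick paths -- substitute your own.
# 2 distribute subvols x replica 3 = 6 bricks.
gluster volume create v replica 3 \
    gfs001:/bricks/b001/v gfs002:/bricks/b001/v gfs003:/bricks/b001/v \
    gfs001:/bricks/b002/v gfs002:/bricks/b002/v gfs003:/bricks/b002/v

# Apply the virt profile (remote-dio, eager-lock, quorum, prefetch off, etc.).
gluster volume set v group virt

# Enable sharding and pick a shard size before writing any VM images.
gluster volume set v features.shard enable
gluster volume set v features.shard-block-size 64MB

gluster volume start v
```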
                                    <div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Mar 15, 2016 at 1:07 PM, Mahdi Adnan <span dir="ltr">&lt;<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:mahdi.adnan@earthlinktele.com">mahdi.adnan@earthlinktele.com</a>&gt;</span>
                                        wrote:<br>
                                        <blockquote class="gmail_quote"
                                          style="margin:0 0 0
                                          .8ex;border-left:1px #ccc
                                          solid;padding-left:1ex">
                                          <div text="#000000"
                                            bgcolor="#FFFFFF"> Thanks
                                            Krutika,<br>
                                            <br>
I have deleted the volume and created a new one.<br>
I found that it may be an issue with NFS itself: I created a new striped volume with sharding enabled, and mounting it via glusterfs worked just fine, but if I mount it with NFS it fails and gives me the same errors.<br>
                                            <br>
                                            <div><font color="#3366ff"><font
                                                  color="#000000">Respectfully<b><br>
                                                  </b><b>Mahdi A. Mahdi</b></font></font><font
                                                color="#3366ff"><br>
                                                <br>
                                              </font></div>
                                            <div>
                                              <div>
                                                <div>On 03/15/2016 06:24
                                                  AM, Krutika Dhananjay
                                                  wrote:<br>
                                                </div>
                                                <blockquote type="cite">
<div dir="ltr">
  <div>
    <div>
      <div>
        <div>
          <div>
            <div>
              <div>Hi,<br>
                <br>
              </div>
              So could you share the xattrs associated with the file at &lt;BRICK_PATH&gt;/.glusterfs/c3/e8/c3e88cc1-7e0a-4d46-9685-2d12131a5e1c<br>
            </div>
            <br>
          </div>
          Here's what you need to execute:<br>
          <br>
        </div>
        # getfattr -d -m . -e hex /mnt/b1/v/.glusterfs/c3/e8/c3e88cc1-7e0a-4d46-9685-2d12131a5e1c on the first node, and<br>
        <br>
        # getfattr -d -m . -e hex /mnt/b2/v/.glusterfs/c3/e8/c3e88cc1-7e0a-4d46-9685-2d12131a5e1c on the second.<br>
      </div>
      <br>
      <br>
    </div>
    Also, it is normally advised to use a replica 3 volume as opposed to a replica 2 volume to guard against split-brains.<br>
    <br>
  </div>
  -Krutika<br>
</div>
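Once `getfattr -e hex` returns the blobs, a few lines of Python can turn the hex string into a byte count. This is a sketch under an assumption, not confirmed in this thread: that the first 8 bytes of `trusted.glusterfs.shard.file-size` hold the logical file size in network byte order (big-endian).

```python
def parse_shard_file_size(hex_value: str) -> int:
    """Decode the hex value getfattr -e hex prints for
    trusted.glusterfs.shard.file-size.

    Assumed layout (not verified here): the first 8 bytes of the
    blob are the logical file size, big-endian."""
    if hex_value.startswith("0x"):
        hex_value = hex_value[2:]
    raw = bytes.fromhex(hex_value)
    return int.from_bytes(raw[:8], "big")

# A 2 GiB -flat.vmdk would decode as:
print(parse_shard_file_size("0x0000000080000000" + "00" * 24))  # 2147483648
```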
                                                  <div
                                                    class="gmail_extra"><br>
<div class="gmail_quote">On Mon, Mar 14, 2016 at 3:17 PM, Mahdi Adnan <span dir="ltr">&lt;<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:mahdi.adnan@earthlinktele.com">mahdi.adnan@earthlinktele.com</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> Sorry for the serial posting, but I got new logs that might help.<br>
                                                          <br>
The messages appear during the migration:<br>
<br>
/var/log/glusterfs/nfs.log<br>
                                                          <br>
                                                          <br>
[2016-03-14 09:45:04.573765] I [MSGID: 109036] [dht-common.c:8043:dht_log_new_layout_for_dir_selfheal] 0-testv-dht: Setting layout of /New Virtual Machine_1 with [Subvol_name: testv-stripe-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ], <br>
[2016-03-14 09:45:04.957499] E [shard.c:369:shard_modify_size_and_block_count] (--&gt;/usr/lib64/glusterfs/3.7.8/xlator/cluster/distribute.so(dht_file_setattr_cbk+0x14f) [0x7f27a13c067f] --&gt;/usr/lib64/glusterfs/3.7.8/xlator/features/shard.so(shard_common_setattr_cbk+0xcc) [0x7f27a116681c] --&gt;/usr/lib64/glusterfs/3.7.8/xlator/features/shard.so(shard_modify_size_and_block_count+0xdd) [0x7f27a116584d] ) 0-testv-shard: Failed to get trusted.glusterfs.shard.file-size for c3e88cc1-7e0a-4d46-9685-2d12131a5e1c<br>
[2016-03-14 09:45:04.957577] W [MSGID: 112199] [nfs3-helpers.c:3418:nfs3_log_common_res] 0-nfs-nfsv3: /New Virtual Machine_1/New Virtual Machine-flat.vmdk =&gt; (XID: 3fec5a26, SETATTR: NFS: 22(Invalid argument for operation), POSIX: 22(Invalid argument)) [Invalid argument]<br>
[2016-03-14 09:45:05.079657] E [MSGID: 112069] [nfs3.c:3649:nfs3_rmdir_resume] 0-nfs-nfsv3: No such file or directory: (<a moz-do-not-send="true" href="http://192.168.221.52:826" target="_blank">192.168.221.52:826</a>) testv : 00000000-0000-0000-0000-000000000001<br>
                                                          <br>
                                                          <br>
                                                          <br>
<div><font color="#3366ff"><font color="#000000">Respectfully<b><br>
</b><b>Mahdi A. Mahdi<br>
<br>
</b></font></font><font color="#3366ff"> </font></div>
<div>
<div>
<div>On 03/14/2016 11:14 AM, Mahdi Adnan wrote:<br>
</div>
                                                          <blockquote
                                                          type="cite">
                                                          So i have
                                                          deployed a new
                                                          server "Cisco
                                                          UCS C220M4"
                                                          and created a
                                                          new volume;<br>
                                                          <br>
Volume Name: testv<br>
Type: Stripe<br>
Volume ID: 55cdac79-fe87-4f1f-90c0-15c9100fe00b<br>
Status: Started<br>
Number of Bricks: 1 x 2 = 2<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 10.70.0.250:/mnt/b1/v<br>
Brick2: 10.70.0.250:/mnt/b2/v<br>
Options Reconfigured:<br>
nfs.disable: off<br>
features.shard-block-size: 64MB<br>
features.shard: enable<br>
cluster.server-quorum-type: server<br>
cluster.quorum-type: auto<br>
network.remote-dio: enable<br>
cluster.eager-lock: enable<br>
performance.stat-prefetch: off<br>
performance.io-cache: off<br>
performance.read-ahead: off<br>
performance.quick-read: off<br>
performance.readdir-ahead: off<br>
                                                          <br>
Same error.<br>
<br>
Can anyone share the info of a working striped volume?<br>
                                                          <div><br>
                                                          </div>
                                                          <div>On
                                                          03/14/2016
                                                          09:02 AM,
                                                          Mahdi Adnan
                                                          wrote:<br>
                                                          </div>
<blockquote type="cite"> I have a pool of two bricks in the same server;<br>
<br>
Volume Name: k<br>
Type: Stripe<br>
Volume ID: 1e9281ce-2a8b-44e8-a0c6-e3ebf7416b2b<br>
Status: Started<br>
Number of Bricks: 1 x 2 = 2<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: gfs001:/bricks/t1/k<br>
Brick2: gfs001:/bricks/t2/k<br>
Options Reconfigured:<br>
features.shard-block-size: 64MB<br>
features.shard: on<br>
cluster.server-quorum-type: server<br>
cluster.quorum-type: auto<br>
network.remote-dio: enable<br>
cluster.eager-lock: enable<br>
performance.stat-prefetch: off<br>
performance.io-cache: off<br>
performance.read-ahead: off<br>
performance.quick-read: off<br>
performance.readdir-ahead: off<br>
                                                          <br>
Same issue.<br>
glusterfs 3.7.8 built on Mar 10 2016 20:20:45.<br>
                                                          <br>
                                                          <br>
                                                          <div><font
                                                          color="#3366ff"><font
color="#000000">Respectfully<b><br>
                                                          </b><b>Mahdi
                                                          A. Mahdi</b></font></font><font
color="#3366ff"><br>
                                                          <br>
                                                          </font><font
                                                          color="#3366ff"><font
color="#000000">Systems Administrator<br>
                                                          IT. Department<br>
                                                          <a
                                                          moz-do-not-send="true"
href="https://www.facebook.com/earthlinktele" target="_blank">Earthlink
Telecommunications</a></font><br>
                                                          <br>
                                                          <font
                                                          color="#000000">Cell:



                                                          07903316180<br>
                                                          Work: 3352<br>
                                                          Skype: </font><font
color="#000000"><a moz-do-not-send="true"
                                                          href="mailto:mahdi.adnan@outlook.com"
target="_blank">mahdi.adnan@outlook.com</a></font></font></div>
                                                          <div>On
                                                          03/14/2016
                                                          08:11 AM,
                                                          Niels de Vos
                                                          wrote:<br>
                                                          </div>
                                                          <blockquote
                                                          type="cite">
                                                          <pre>On Mon, Mar 14, 2016 at 08:12:27AM +0530, Krutika Dhananjay wrote:
</pre>
                                                          <blockquote
                                                          type="cite">
<pre>It would be better to use sharding over stripe for your VM use case. It
offers better distribution and utilisation of bricks and better heal
performance.
And it is well tested.
</pre>
                                                          </blockquote>
                                                          <pre>Basically the "striping" feature is deprecated, "sharding" is its
improved replacement. I expect to see "striping" completely dropped in
the next major release.

Niels


</pre>
                                                          <blockquote
                                                          type="cite">
<pre>A couple of things to note before you do that:
1. Most of the bug fixes in sharding have gone into 3.7.8. So it is advised
that you use 3.7.8 or above.
2. When you enable sharding on a volume, already existing files in the
volume do not get sharded. Only the files that are newly created from the
time sharding is enabled will.
    If you do want to shard the existing files, then you would need to cp
them to a temp name within the volume, and then rename them back to the
original file name.

HTH,
Krutika

On Sun, Mar 13, 2016 at 11:49 PM, Mahdi Adnan &lt;<a moz-do-not-send="true" href="mailto:mahdi.adnan@earthlinktele.com" target="_blank">mahdi.adnan@earthlinktele.com</a>
</pre>
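Krutika's copy-then-rename step can be sketched as follows. MOUNT and the image name are placeholders for your Gluster mount point and an existing file; run this against the mount, never against the bricks directly:

```shell
# Re-create an existing file so the newly written copy picks up sharding.
# MOUNT and vm01.img are hypothetical; point MOUNT at your Gluster mount.
MOUNT="${MOUNT:-$(mktemp -d)}"               # temp dir stands in for the mount here
touch "$MOUNT/vm01.img"                      # stands in for a pre-existing VM image
cp "$MOUNT/vm01.img" "$MOUNT/vm01.img.tmp"   # the copy is newly created, so it gets sharded
mv "$MOUNT/vm01.img.tmp" "$MOUNT/vm01.img"   # rename back over the original name
```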
                                                          <blockquote
                                                          type="cite">
                                                          <pre>wrote:
</pre>
                                                          </blockquote>
                                                          <blockquote
                                                          type="cite">
<pre>I couldn't find anything related to cache in the HBAs.
Which logs would be useful in my case? I see only the brick logs, which contain
nothing during the failure.

###
[2016-03-13 18:05:19.728614] E [MSGID: 113022] [posix.c:1232:posix_mknod]
0-vmware-posix: mknod on
/bricks/b003/vmware/.shard/17d75e20-16f1-405e-9fa5-99ee7b1bd7f1.511 failed
[File exists]
[2016-03-13 18:07:23.337086] E [MSGID: 113022] [posix.c:1232:posix_mknod]
0-vmware-posix: mknod on
/bricks/b003/vmware/.shard/eef2d538-8eee-4e58-bc88-fbf7dc03b263.4095 failed
[File exists]
[2016-03-13 18:07:55.027600] W [trash.c:1922:trash_rmdir] 0-vmware-trash:
rmdir issued on /.trashcan/, which is not permitted
[2016-03-13 18:07:55.027635] I [MSGID: 115056]
[server-rpc-fops.c:459:server_rmdir_cbk] 0-vmware-server: 41987: RMDIR
/.trashcan/internal_op (00000000-0000-0000-0000-000000000005/internal_op)
==&gt; (Operation not permitted) [Operation not permitted]
[2016-03-13 18:11:34.353441] I [login.c:81:gf_auth] 0-auth/login: allowed
user names: c0c72c37-477a-49a5-a305-3372c1c2f2b4
[2016-03-13 18:11:34.353463] I [MSGID: 115029]
[server-handshake.c:612:server_setvolume] 0-vmware-server: accepted client
from gfs002-2727-2016/03/13-20:17:43:613597-vmware-client-4-0-0 (version:
3.7.8)
[2016-03-13 18:11:34.591139] I [login.c:81:gf_auth] 0-auth/login: allowed
user names: c0c72c37-477a-49a5-a305-3372c1c2f2b4
[2016-03-13 18:11:34.591173] I [MSGID: 115029]
[server-handshake.c:612:server_setvolume] 0-vmware-server: accepted client
from gfs002-2719-2016/03/13-20:17:42:609388-vmware-client-4-0-0 (version:
3.7.8)
###

ESXi just keeps telling me "Cannot clone T: The virtual disk is either
corrupted or not a supported format.
error
3/13/2016 9:06:20 PM
Clone virtual machine
T
VCENTER.LOCAL\Administrator
"

My setup is 2 servers with a floating IP controlled by CTDB, and my ESXi
server mounts the NFS export via the floating IP.





On 03/13/2016 08:40 PM, pkoelle wrote:

</pre>
                                                          <blockquote
                                                          type="cite">
                                                          <pre>Am 13.03.2016 um 18:22 schrieb David Gossage:

</pre>
                                                          <blockquote
                                                          type="cite">
                                                          <pre>On Sun, Mar 13, 2016 at 11:07 AM, Mahdi Adnan &lt;
<a moz-do-not-send="true" href="mailto:mahdi.adnan@earthlinktele.com" target="_blank">mahdi.adnan@earthlinktele.com</a>

</pre>
                                                          <blockquote
                                                          type="cite">
                                                          <pre>wrote:

</pre>
                                                          </blockquote>
                                                          <pre>My HBAs are LSISAS1068E, and the filesystem is XFS.
</pre>
                                                          <blockquote
                                                          type="cite">
<pre>I tried EXT4 and it did not help.
I have created a striped volume on one server with two bricks; same issue.
I also tried a replicated volume with just sharding enabled; same issue.
As soon as I disable sharding it works just fine, so neither sharding nor
striping works for me.
I did follow up on some threads in the mailing list and tried some of
the fixes that worked for others; none worked for me. :(


</pre>
                                                          </blockquote>
                                                          <pre>Is it possible the LSI has write-cache enabled?

</pre>
                                                          </blockquote>
<pre>Why is that relevant? Even the backing filesystem has no idea if there is
a RAID or write cache or whatever. There are blocks and sync(), end of
story.
If you lose power and screw up your recovery, OR do funky stuff with SAS
multipathing, that might be an issue with a controller cache. AFAIK that's
not what we are talking about.

I'm afraid that unless the OP has some logs from the server, a
reproducible testcase, or a backtrace from client or server, this isn't
getting us anywhere.

cheers
Paul


</pre>
                                                          <blockquote
                                                          type="cite">
                                                          <pre>On 03/13/2016 06:54 PM, David Gossage wrote:
</pre>
                                                          <blockquote
                                                          type="cite">
                                                          <pre>On Sun, Mar 13, 2016 at 8:16 AM, Mahdi Adnan &lt;
<a moz-do-not-send="true" href="mailto:mahdi.adnan@earthlinktele.com" target="_blank">mahdi.adnan@earthlinktele.com</a>&gt; wrote:

Okay, so I have enabled shard on my test volume and it did not help,
</pre>
                                                          <blockquote
                                                          type="cite">
<pre>stupidly enough, I enabled it on a production volume
("Distributed-Replicate") and it corrupted half of my VMs.
I have updated Gluster to the latest and nothing seems to have changed in
my situation.
Below is the info of my volume:


</pre>
                                                          </blockquote>
<pre>I was pointing at the settings in that email as an example of
corruption fixing. I wouldn't recommend enabling sharding if you haven't gotten the
base working yet on that cluster. What HBAs are you using, and what is the
filesystem layout for the bricks?


Number of Bricks: 3 x 2 = 6
</pre>
                                                          <blockquote
                                                          type="cite">
                                                          <pre>Transport-type: tcp
Bricks:
Brick1: gfs001:/bricks/b001/vmware
Brick2: gfs002:/bricks/b004/vmware
Brick3: gfs001:/bricks/b002/vmware
Brick4: gfs002:/bricks/b005/vmware
Brick5: gfs001:/bricks/b003/vmware
Brick6: gfs002:/bricks/b006/vmware
Options Reconfigured:
performance.strict-write-ordering: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
performance.stat-prefetch: disable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
cluster.eager-lock: enable
features.shard-block-size: 16MB
features.shard: on
performance.readdir-ahead: off


On 03/12/2016 08:11 PM, David Gossage wrote:


On Sat, Mar 12, 2016 at 10:21 AM, Mahdi Adnan &lt;
<a moz-do-not-send="true" href="mailto:mahdi.adnan@earthlinktele.com" target="_blank">mahdi.adnan@earthlinktele.com</a>&gt; wrote:

Both servers have HBAs, no RAID, and I can set up a replicated or
</pre>
                                                          <blockquote
                                                          type="cite">
<pre>dispersed volume without any issues.
Logs are clean, and when I tried to migrate a VM and got the error,
nothing showed up in the logs.
I tried mounting the volume on my laptop and it mounted fine, but if I
use dd to create a data file it just hangs and I can't cancel it, and I
can't unmount it or anything; I just have to reboot.
The same servers have another volume on other bricks in a distributed
replica, which works fine.
I have even tried the same setup in a virtual environment (created two
VMs, installed Gluster, and created a replicated striped volume) and again
the same thing: data corruption.


</pre>
                                                          </blockquote>
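As a minimal check of whether plain writes to the mount complete at all, roughly the dd test described above, something along these lines can be used (TARGET is a placeholder path; on the real setup it would sit on the mounted volume):

```shell
# Write 16 MiB and flush with fsync at the end; on a healthy mount this
# returns promptly, while on the broken striped volume it reportedly hangs.
TARGET="${TARGET:-$(mktemp -d)}/ddtest"
dd if=/dev/zero of="$TARGET" bs=1M count=16 conv=fsync status=none
stat -c %s "$TARGET"   # prints 16777216
```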
<pre>I'd look through the mail archives for a topic called "Shard in
Production", I think. The shard portion may not be relevant, but it does
discuss certain settings that had to be applied with regard to avoiding
corruption with VMs. You may want to try disabling
performance.readdir-ahead
also.



</pre>
                                                          <blockquote
                                                          type="cite">
                                                          <pre>On 03/12/2016 07:02 PM, David Gossage wrote:



On Sat, Mar 12, 2016 at 9:51 AM, Mahdi Adnan &lt;
<a moz-do-not-send="true" href="mailto:mahdi.adnan@earthlinktele.com" target="_blank">mahdi.adnan@earthlinktele.com</a>&gt; wrote:

Thanks David,
</pre>
                                                          <blockquote
                                                          type="cite">
<pre>My settings are all defaults; I have just created the pool and started it.
I have set the settings as you recommended, and it seems to be the
same issue;

Type: Striped-Replicate
Volume ID: 44adfd8c-2ed1-4aa5-b256-d12b64f7fc14
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gfs001:/bricks/t1/s
Brick2: gfs002:/bricks/t1/s
Brick3: gfs001:/bricks/t2/s
Brick4: gfs002:/bricks/t2/s
Options Reconfigured:
performance.stat-prefetch: off
network.remote-dio: on
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on


</pre>
                                                          </blockquote>
<pre>Is there perhaps a RAID controller doing any caching?

In the gluster logs, are any errors being reported during the migration
process?
Since they aren't in use yet, have you tested making just mirrored bricks
using different pairings of servers, two at a time, to see if the problem
follows a certain machine or network ports?




</pre>
                                                          <blockquote
                                                          type="cite">
                                                          <pre>On 03/12/2016 03:25 PM, David Gossage wrote:



On Sat, Mar 12, 2016 at 1:55 AM, Mahdi Adnan &lt;
<a moz-do-not-send="true" href="mailto:mahdi.adnan@earthlinktele.com" target="_blank">mahdi.adnan@earthlinktele.com</a>&gt; wrote:

Dears,
</pre>
                                                          <blockquote
                                                          type="cite">
<pre>I have created a replicated striped volume with two bricks and two
servers, but I can't use it: when I mount it in ESXi and try to
migrate a VM to it, the data gets corrupted.
Does anyone have any idea why this is happening?

Dell 2950 x2
Seagate 15k 600GB
CentOS 7.2
Gluster 3.7.8

Appreciate your help.


</pre>
                                                          </blockquote>
<pre>Most reports of this I have seen end up being settings related. Post
your gluster volume info. Below are what I have seen as the most commonly
recommended settings.
I'd hazard a guess you may have some of the read-ahead cache or prefetch
settings on.

quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=on


</pre>
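Assuming a volume named myvol (a placeholder), the settings above map onto gluster volume set commands along these lines; a sketch, to be run on one of the Gluster servers:

```shell
# Hypothetical volume name "myvol"; substitute your own volume name.
gluster volume set myvol performance.quick-read off
gluster volume set myvol performance.read-ahead off
gluster volume set myvol performance.io-cache off
gluster volume set myvol performance.stat-prefetch off
gluster volume set myvol cluster.eager-lock enable
gluster volume set myvol network.remote-dio on
```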
                                                          <blockquote
                                                          type="cite">
                                                          <pre>Mahdi Adnan
System Admin


_______________________________________________
Gluster-users mailing list
<a moz-do-not-send="true" href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>
<a moz-do-not-send="true" href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a>


</pre>
                                                          </blockquote>
                                                          </blockquote>
                                                          </blockquote>
                                                          </blockquote>
                                                          </blockquote>
                                                          </blockquote>
                                                          </blockquote>
                                                          </blockquote>
                                                          </blockquote>
                                                          </blockquote>
                                                          <br>
                                                          <br>
                                                          </blockquote>
                                                          <br>
                                                          <br>
                                                          </blockquote>
                                                          <br>
                                                          </div>
                                                          </div>
                                                        </div>
                                                        <br>
                                                      </blockquote>
                                                    </div>
                                                    <br>
                                                  </div>
                                                </blockquote>
                                                <br>
                                              </div>
                                            </div>
                                          </div>
                                        </blockquote>
                                      </div>
                                      <br>
                                    </div>
                                  </blockquote>
                                  <br>
                                </div>
                              </div>
                            </div>
                          </blockquote>
                        </div>
                        <br>
                      </div>
                    </blockquote>
                    <br>
                  </div>
                </div>
              </div>
            </blockquote>
          </div>
          <br>
        </div>
      </blockquote>
      <br>
      <br>
    </blockquote>
    <br>
  </body>
</html>