<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=windows-1252">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <br>
    <div class="moz-forward-container"><br>
      <meta http-equiv="Content-Type" content="text/html;
        charset=windows-1252">
      <style type="text/css" style="display:none;"><!-- P {margin-top:0;margin-bottom:0;} --></style>
      <div id="divtagdefaultwrapper"
style="font-size:12pt;color:#000000;background-color:#FFFFFF;font-family:Calibri,Arial,Helvetica,sans-serif;">
        <p>We have a machine that previously ran CentOS 6.8 and Gluster
          3.7.10-1 with 2 bricks. The machine had to be rebuilt with
          CentOS 6.8, and the 2 bricks were not reformatted. Gluster
          3.7.11 was installed with the new OS; we can start the
          service, create the volume with the 2 bricks, and mount the
          Gluster share.<br>
        </p>
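        <p>For reference, the volume was recreated and mounted roughly as
          follows (a sketch only: the volume name, server, first brick and
          mount point are as in the logs below, while /mnt/brick2 simply
          stands in for the second brick):</p>
        <pre>
# recreate the distributed volume on the existing, unformatted bricks
# (/mnt/brick2 is illustrative; "force" may be needed because the bricks
# were part of the earlier 3.7.10 volume)
gluster volume create data-volume mseas-data2:/mnt/brick1 mseas-data2:/mnt/brick2 force
gluster volume start data-volume

# mount the share
mount -t glusterfs mseas-data2:/data-volume /data
</pre>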
        <p>The top-level folder name (<span>gluster-data</span>) shows up
          correctly in the mounted share, but listing it fails with an
          error:</p>
        <div> ls /data<br>
          ls: cannot access /data/gluster-data: No such file or
          directory<br>
          gluster-data<br>
          <br>
          The data and directories are still there (i.e. we can still
          see them by looking at the underlying brick file systems), but
          Gluster isn't serving them.<br>
          <br>
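          Concretely (brick path as in the brick log below):<br>
          <pre>
# the directory contents are visible directly on the brick file system ...
ls /mnt/brick1/gluster-data

# ... but not through the Gluster mount
ls /data/gluster-data
#   ls: cannot access /data/gluster-data: No such file or directory
</pre>
          <br>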
          Looking in the log file for each brick, we see the same errors:<br>
          <div>[2016-06-03 04:30:07.494068] I [MSGID: 100030]
            [glusterfsd.c:2332:main] 0-/usr/sbin/glusterfsd: Started
            running /usr/sbin/glusterfsd version 3.7.11 (args:
            /usr/sbin/glusterfsd -s mseas-data2 --volfile-id
            data-volume.mseas-data2.mnt-brick1 -p
            /var/lib/glusterd/vols/data-volume/run/mseas-data2-mnt-brick1.pid
            -S /var/run/gluster/aa572e87933c930cb53983de35bdccbe.socket
            --brick-name /mnt/brick1 -l
            /var/log/glusterfs/bricks/mnt-brick1.log --xlator-option
            *-posix.glusterd-uuid=c1110fd9-cb99-4ca1-b18a-536a122d67ef
            --brick-port 49152 --xlator-option
            data-volume-server.listen-port=49152)<br>
            [2016-06-03 04:30:07.510671] I [MSGID: 101190]
            [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
            Started thread with index 1<br>
            [2016-06-03 04:30:07.519040] I
            [graph.c:269:gf_add_cmdline_options] 0-data-volume-server:
            adding option 'listen-port' for volume 'data-volume-server'
            with value '49152'<br>
            [2016-06-03 04:30:07.519089] I
            [graph.c:269:gf_add_cmdline_options] 0-data-volume-posix:
            adding option 'glusterd-uuid' for volume 'data-volume-posix'
            with value 'c1110fd9-cb99-4ca1-b18a-536a122d67ef'<br>
            [2016-06-03 04:30:07.519479] I [MSGID: 101190]
            [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
            Started thread with index 2<br>
            [2016-06-03 04:30:07.519486] I [MSGID: 115034]
            [server.c:403:_check_for_auth_option] 0-/mnt/brick1: skip
            format check for non-addr auth option
            auth.login./mnt/brick1.allow<br>
            [2016-06-03 04:30:07.519537] I [MSGID: 115034]
            [server.c:403:_check_for_auth_option] 0-/mnt/brick1: skip
            format check for non-addr auth option
            auth.login.0016d59c-9691-4bb2-bc44-b1d8b19dd230.password<br>
            [2016-06-03 04:30:07.520926] I
            [rpcsvc.c:2215:rpcsvc_set_outstanding_rpc_limit]
            0-rpc-service: Configured rpc.outstanding-rpc-limit with
            value 64<br>
            [2016-06-03 04:30:07.521003] W [MSGID: 101002]
            [options.c:957:xl_opt_validate] 0-data-volume-server: option
            'listen-port' is deprecated, preferred is
            'transport.socket.listen-port', continuing with correction<br>
            [2016-06-03 04:30:07.523056] I [MSGID: 121050]
            [ctr-helper.c:259:extract_ctr_options] 0-gfdbdatastore: CTR
            Xlator is disabled.<br>
            [2016-06-03 04:30:07.523077] W [MSGID: 101105]
            [gfdb_sqlite3.h:239:gfdb_set_sql_params]
            0-data-volume-changetimerecorder: Failed to retrieve
            sql-db-pagesize from params.Assigning default value: 4096<br>
            [2016-06-03 04:30:07.523086] W [MSGID: 101105]
            [gfdb_sqlite3.h:239:gfdb_set_sql_params]
            0-data-volume-changetimerecorder: Failed to retrieve
            sql-db-journalmode from params.Assigning default value: wal<br>
            [2016-06-03 04:30:07.523095] W [MSGID: 101105]
            [gfdb_sqlite3.h:239:gfdb_set_sql_params]
            0-data-volume-changetimerecorder: Failed to retrieve
            sql-db-sync from params.Assigning default value: off<br>
            [2016-06-03 04:30:07.523102] W [MSGID: 101105]
            [gfdb_sqlite3.h:239:gfdb_set_sql_params]
            0-data-volume-changetimerecorder: Failed to retrieve
            sql-db-autovacuum from params.Assigning default value: none<br>
            [2016-06-03 04:30:07.523280] I [trash.c:2369:init]
            0-data-volume-trash: no option specified for 'eliminate',
            using NULL<br>
            [2016-06-03 04:30:07.523910] W
            [graph.c:357:_log_if_unknown_option] 0-data-volume-server:
            option 'rpc-auth.auth-glusterfs' is not recognized<br>
            [2016-06-03 04:30:07.523937] W
            [graph.c:357:_log_if_unknown_option] 0-data-volume-server:
            option 'rpc-auth.auth-unix' is not recognized<br>
            [2016-06-03 04:30:07.523955] W
            [graph.c:357:_log_if_unknown_option] 0-data-volume-server:
            option 'rpc-auth.auth-null' is not recognized<br>
            [2016-06-03 04:30:07.523989] W
            [graph.c:357:_log_if_unknown_option] 0-data-volume-quota:
            option 'timeout' is not recognized<br>
            [2016-06-03 04:30:07.524031] W
            [graph.c:357:_log_if_unknown_option] 0-data-volume-trash:
            option 'brick-path' is not recognized<br>
            [2016-06-03 04:30:07.529994] W [MSGID: 113036]
            [posix.c:2211:posix_rename] 0-data-volume-posix: found
            directory at /mnt/brick1/.trashcan/ while expecting ENOENT
            [File exists]<br>
            Final graph:<br>
+------------------------------------------------------------------------------+<br>
              1: volume data-volume-posix<br>
              2:     type storage/posix<br>
              3:     option glusterd-uuid
            c1110fd9-cb99-4ca1-b18a-536a122d67ef<br>
              4:     option directory /mnt/brick1<br>
              5:     option volume-id
            c54b2a60-ffdc-4d82-9db1-890e41002e28<br>
              6: end-volume<br>
              7:<br>
              8: volume data-volume-trash<br>
              9:     type features/trash<br>
             10:     option trash-dir .trashcan<br>
             11:     option brick-path /mnt/brick1<br>
             12:     option trash-internal-op off<br>
             13:     subvolumes data-volume-posix<br>
             14: end-volume<br>
             15:<br>
             16: volume data-volume-changetimerecorder<br>
             17:     type features/changetimerecorder<br>
             18:     option db-type sqlite3<br>
             19:     option hot-brick off<br>
             20:     option db-name brick1.db<br>
             21:     option db-path /mnt/brick1/.glusterfs/<br>
             22:     option record-exit off<br>
             23:     option ctr_link_consistency off<br>
             24:     option ctr_lookupheal_link_timeout 300<br>
             25:     option ctr_lookupheal_inode_timeout 300<br>
             26:     option record-entry on<br>
             27:     option ctr-enabled off<br>
             28:     option record-counters off<br>
             29:     option ctr-record-metadata-heat off<br>
             30:     option sql-db-cachesize 1000<br>
             31:     option sql-db-wal-autocheckpoint 1000<br>
             32:     subvolumes data-volume-trash<br>
             33: end-volume<br>
             34:<br>
             35: volume data-volume-changelog<br>
             36:     type features/changelog<br>
             37:     option changelog-brick /mnt/brick1<br>
             38:     option changelog-dir
            /mnt/brick1/.glusterfs/changelogs<br>
             39:     option changelog-barrier-timeout 120<br>
             40:     subvolumes data-volume-changetimerecorder<br>
             41: end-volume<br>
             42:<br>
             43: volume data-volume-bitrot-stub<br>
             44:     type features/bitrot-stub<br>
             45:     option export /mnt/brick1<br>
             46:     subvolumes data-volume-changelog<br>
             47: end-volume<br>
             48:<br>
             49: volume data-volume-access-control<br>
             50:     type features/access-control<br>
             51:     subvolumes data-volume-bitrot-stub<br>
             52: end-volume<br>
             53:<br>
             54: volume data-volume-locks<br>
             55:     type features/locks<br>
             56:     subvolumes data-volume-access-control<br>
             57: end-volume<br>
             58:<br>
             59: volume data-volume-upcall<br>
             60:     type features/upcall<br>
             61:     option cache-invalidation off<br>
             62:     subvolumes data-volume-locks<br>
             63: end-volume<br>
             64:<br>
             65: volume data-volume-io-threads<br>
             66:     type performance/io-threads<br>
             67:     subvolumes data-volume-upcall<br>
             68: end-volume<br>
             69:<br>
             70: volume data-volume-marker<br>
             71:     type features/marker<br>
             72:     option volume-uuid
            c54b2a60-ffdc-4d82-9db1-890e41002e28<br>
             73:     option timestamp-file
            /var/lib/glusterd/vols/data-volume/marker.tstamp<br>
             74:     option quota-version 0<br>
             75:     option xtime off<br>
             76:     option gsync-force-xtime off<br>
             77:     option quota off<br>
             78:     option inode-quota off<br>
             79:     subvolumes data-volume-io-threads<br>
             80: end-volume<br>
             81:<br>
             82: volume data-volume-barrier<br>
             83:     type features/barrier<br>
             84:     option barrier disable<br>
             85:     option barrier-timeout 120<br>
             86:     subvolumes data-volume-marker<br>
             87: end-volume<br>
             88:<br>
             89: volume data-volume-index<br>
             90:     type features/index<br>
             91:     option index-base /mnt/brick1/.glusterfs/indices<br>
             92:     subvolumes data-volume-barrier<br>
             93: end-volume<br>
             94:<br>
             95: volume data-volume-quota<br>
             96:     type features/quota<br>
             97:     option volume-uuid data-volume<br>
             98:     option server-quota off<br>
             99:     option timeout 0<br>
            100:     option deem-statfs off<br>
            101:     subvolumes data-volume-index<br>
            102: end-volume<br>
            103:<br>
            104: volume data-volume-worm<br>
            105:     type features/worm<br>
            106:     option worm off<br>
            107:     subvolumes data-volume-quota<br>
            108: end-volume<br>
            109:<br>
            110: volume data-volume-read-only<br>
            111:     type features/read-only<br>
            112:     option read-only off<br>
            113:     subvolumes data-volume-worm<br>
            114: end-volume<br>
            115:<br>
            116: volume /mnt/brick1<br>
            117:     type debug/io-stats<br>
            118:     option log-level INFO<br>
            119:     option latency-measurement off<br>
            120:     option count-fop-hits off<br>
            121:     subvolumes data-volume-read-only<br>
            122: end-volume<br>
            123:<br>
            124: volume data-volume-server<br>
            125:     type protocol/server<br>
            126:     option transport.socket.listen-port 49152<br>
            127:     option rpc-auth.auth-glusterfs on<br>
            128:     option rpc-auth.auth-unix on<br>
            129:     option rpc-auth.auth-null on<br>
            130:     option rpc-auth-allow-insecure on<br>
            131:     option transport-type tcp<br>
            132:     option auth.login./mnt/brick1.allow
            0016d59c-9691-4bb2-bc44-b1d8b19dd230<br>
            133:     option
            auth.login.0016d59c-9691-4bb2-bc44-b1d8b19dd230.password
            b021dbcf-e114-4c23-ad9f-968a2d93dd61<br>
            134:     option auth.addr./mnt/brick1.allow *<br>
            135:     subvolumes /mnt/brick1<br>
            136: end-volume<br>
            137:<br>
+------------------------------------------------------------------------------+<br>
            [2016-06-03 04:30:07.583590] I [login.c:81:gf_auth]
            0-auth/login: allowed user names:
            0016d59c-9691-4bb2-bc44-b1d8b19dd230<br>
            [2016-06-03 04:30:07.583640] I [MSGID: 115029]
            [server-handshake.c:690:server_setvolume]
            0-data-volume-server: accepted client from
            mseas-data2-2383-2016/06/03-04:30:07:127671-data-volume-client-0-0-0
            (version: 3.7.11)<br>
            [2016-06-03 04:30:40.124584] I [login.c:81:gf_auth]
            0-auth/login: allowed user names:
            0016d59c-9691-4bb2-bc44-b1d8b19dd230<br>
            [2016-06-03 04:30:40.124628] I [MSGID: 115029]
            [server-handshake.c:690:server_setvolume]
            0-data-volume-server: accepted client from
            mseas-data2-2500-2016/06/03-04:30:40:46064-data-volume-client-0-0-0
            (version: 3.7.11)<br>
            [2016-06-03 04:30:43.265342] W [MSGID: 101182]
            [inode.c:174:__foreach_ancestor_dentry]
            0-data-volume-server: per dentry fn returned 1<br>
            [2016-06-03 04:30:43.265393] C [MSGID: 101184]
            [inode.c:228:__is_dentry_cyclic] 0-/mnt/brick1/inode:
            detected cyclic loop formation during inode linkage. inode
            (00000000-0000-0000-0000-000000000001) linking under itself
            as gluster-data<br>
            [2016-06-03 04:30:43.269197] W [MSGID: 101182]
            [inode.c:174:__foreach_ancestor_dentry]
            0-data-volume-server: per dentry fn returned 1<br>
            [2016-06-03 04:30:43.269241] C [MSGID: 101184]
            [inode.c:228:__is_dentry_cyclic] 0-/mnt/brick1/inode:
            detected cyclic loop formation during inode linkage. inode
            (00000000-0000-0000-0000-000000000001) linking under itself
            as gluster-data<br>
            [2016-06-03 04:30:43.270689] W [MSGID: 101182]
            [inode.c:174:__foreach_ancestor_dentry]
            0-data-volume-server: per dentry fn returned 1<br>
            [2016-06-03 04:30:43.270733] C [MSGID: 101184]
            [inode.c:228:__is_dentry_cyclic] 0-/mnt/brick1/inode:
            detected cyclic loop formation during inode linkage. inode
            (00000000-0000-0000-0000-000000000001) linking under itself
            as gluster-data<br>
          </div>
          <br>
          <br>
          This is a distributed volume, not a replicated one. Can we
          delete the Gluster volume, remove the .glusterfs folders from
          each brick, and recreate the volume? Will Gluster re-index the
          existing files on both bricks?<br>
          <br>
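          What we have in mind is roughly the following sketch (not yet
          run; brick paths are from our setup, with /mnt/brick2 standing
          in for the second brick, and the setfattr lines are, as far as
          we know, the usual way to clear the old volume identity from a
          reused brick):<br>
          <pre>
# tear down the existing volume definition
gluster volume stop data-volume
gluster volume delete data-volume

# on each brick (shown for brick1), drop the old Gluster metadata
rm -rf /mnt/brick1/.glusterfs
setfattr -x trusted.glusterfs.volume-id /mnt/brick1
setfattr -x trusted.gfid /mnt/brick1

# recreate and restart the volume from the same bricks
gluster volume create data-volume mseas-data2:/mnt/brick1 mseas-data2:/mnt/brick2
gluster volume start data-volume
</pre>
          <br>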
          Note:<br>
          Prompted by the last lines of the log file, we found that there
          is a soft link at /mnt/brick1/.glusterfs/00/00/<span>00000000-0000-0000-0000-000000000001</span>
          pointing to ../../..<br>
          <br>
          We have tried removing the link and restarting the service, with
          no change in behavior; the link is replaced/rebuilt on service
          startup.
          <br>
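          If it helps with diagnosis, this is the kind of check we can run
          directly against a brick (getfattr on the brick paths, not on
          the mount); the comments describe what we would expect given the
          cyclic-loop messages above:<br>
          <pre>
# GFID of the brick root (should be the all-zeros root GFID ending in 01)
getfattr -n trusted.gfid -e hex /mnt/brick1

# GFID of the problem directory -- if this also reports the root GFID,
# it would match the "linking under itself as gluster-data" messages
getfattr -n trusted.gfid -e hex /mnt/brick1/gluster-data
</pre>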
          <div><br>
          </div>
          Any advice you can give will be appreciated.<br>
          <br>
          Thanks<br>
          <pre class="moz-signature" cols="72">-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley                          Email:  <a class="moz-txt-link-abbreviated" href="mailto:phaley@mit.edu">phaley@mit.edu</a>
Center for Ocean Engineering       Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    <a class="moz-txt-link-freetext" href="http://web.mit.edu/phaley/www/">http://web.mit.edu/phaley/www/</a>
77 Massachusetts Avenue
Cambridge, MA  02139-4301
</pre>
        </div>
        <br>
      </div>
      <br>
    </div>
    <br>
  </body>
</html>