<div dir="ltr">Hi all,<div>Any idea regarding the log outputs ?</div><div>What are the ACL to set on bricks directory or gluster brick root ?</div></div><div class="gmail_extra"><br><div class="gmail_quote">2016-12-28 11:25 GMT+01:00 vincent gromakowski <span dir="ltr">&lt;<a href="mailto:vincent.gromakowski@gmail.com" target="_blank">vincent.gromakowski@gmail.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi,<div>Please find below the outputs. I previse that I can read and write to the volume but only with &quot;sudo&quot; or root account whatever the ACL or the ownership I set on fuse directories (even 777) </div><div><br><div><div><i>&gt;sudo gluster peer status</i></div><div><i>Number of Peers: 3</i></div><div><i><br></i></div><div><i>Hostname: bd-reactive-worker-4</i></div><div><i>Uuid: 434a7ee0-9c83-47ce-9a02-<wbr>7c89e2e94ce0</i></div><div><i>State: Peer in Cluster (Connected)</i></div><div><i><br></i></div><div><i>Hostname: bd-reactive-worker-2</i></div><div><i>Uuid: 7f76389c-3f78-4cac-8fd8-<wbr>56f0a9bff47a</i></div><div><i>State: Peer in Cluster (Connected)</i></div><div><i><br></i></div><div><i>Hostname: bd-reactive-worker-3</i></div><div><i>Uuid: e412cae9-6ecd-49cf-be63-<wbr>c46d3e537c83</i></div><div><i>State: Peer in Cluster (Connected)</i></div></div><div><i><br></i></div><div><i><br></i></div><div><i>&gt;sudo gluster volume status</i></div><div><i><div>Status of volume: reactive_small</div><div>Gluster process                             TCP Port  RDMA Port  Online  Pid</div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>Brick bd-reactive-worker-1:/srv/<wbr>gluster/dat</div><div>a/small/brick1                              49155     0          Y       31517</div><div>Brick bd-reactive-worker-2:/srv/<wbr>gluster/dat</div><div>a/small/brick1                              49155     0          Y       1147</div><div>Brick bd-reactive-worker-3:/srv/<wbr>gluster/dat</div><div>a/small/brick1                              49155     0          Y       32455</div><div>Brick bd-reactive-worker-4:/srv/<wbr>gluster/dat</div><div>a/small/brick1                              49155     0          Y       675</div><div>Brick bd-reactive-worker-1:/srv/<wbr>gluster/dat</div><div>a/small/brick2                              49156     0          Y       31536</div><div>Brick bd-reactive-worker-2:/srv/<wbr>gluster/dat</div><div>a/small/brick2                              49156     0          Y       1167</div><div>Brick bd-reactive-worker-3:/srv/<wbr>gluster/dat</div><div>a/small/brick2                              49156     0          Y       32474</div><div>Brick bd-reactive-worker-4:/srv/<wbr>gluster/dat</div><div>a/small/brick2                              49156     0          Y       696</div><div>Brick bd-reactive-worker-1:/srv/<wbr>gluster/dat</div><div>a/small/brick3                              49157     0          Y       31555</div><div>Brick bd-reactive-worker-2:/srv/<wbr>gluster/dat</div><div>a/small/brick3                              49157     0          Y       1190</div><div>Brick bd-reactive-worker-3:/srv/<wbr>gluster/dat</div><div>a/small/brick3                              49157     0          Y       32493</div><div>Brick bd-reactive-worker-4:/srv/<wbr>gluster/dat</div><div>a/small/brick3                              49157     0          Y       715</div><div>Self-heal Daemon on localhost               N/A       N/A        Y       
31575</div><div>Self-heal Daemon on bd-reactive-worker-4    N/A       N/A        Y       736</div><div>Self-heal Daemon on bd-reactive-worker-3    N/A       N/A        Y       32518</div><div>Self-heal Daemon on bd-reactive-worker-2    N/A       N/A        Y       1227</div><div><br></div><div>Task Status of Volume reactive_small</div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>There are no active volume tasks</div><div><br></div></i></div><div><br></div></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">2016-12-28 11:11 GMT+01:00 knarra <span dir="ltr">&lt;<a href="mailto:knarra@redhat.com" target="_blank">knarra@redhat.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000"><span>
    <div class="m_-9100247780270237154m_8455932154570319364moz-cite-prefix">On 12/28/2016 02:42 PM, vincent
      gromakowski wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr">Hi,
        <div>Can someone help me solve this issue ? I am really stuck on
          it and I don&#39;t find any workaround...</div>
        <div>Thanks a lot.</div>
        <div><br>
        </div>
        <div>V</div>
      </div>
    </blockquote></span>
    Hi,<br>
        <br>
        What does gluster volume status show? I think it is because of
    quorum you are not able to read / write to and from the volume. Can
    you check if all your bricks are online and can you paste the output
    of your gluster peer status? In the glusterd.log i see that &quot;<i>Peer
      &lt;bd-reactive-worker-1&gt; (&lt;59500674-750f-4e16-aeea-4a99<wbr>fd67218a&gt;),
      in state &lt;Peer in Cluster&gt;, has disconnected from glusterd.&quot;
    </i><br>
    <br>
    Thanks<br>
    kasturi.<br>
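To check the quorum hypothesis concretely, a short sketch of the usual read-only checks (volume name taken from the thread; "volume get ... all" assumes a gluster release recent enough to support it, which 3.8 should be):

$ sudo gluster volume status reactive_small
# Quorum-related options currently in effect on the volume
$ sudo gluster volume get reactive_small all | grep -i quorum
# Bricks with pending heals or that are unreachable from this node
$ sudo gluster volume heal reactive_small info
# Peer view from this node
$ sudo gluster peer status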
    <blockquote type="cite"><div><div class="m_-9100247780270237154h5">
      <div class="gmail_extra"><br>
        <div class="gmail_quote">2016-12-26 15:02 GMT+01:00 vincent
          gromakowski <span dir="ltr">&lt;<a href="mailto:vincent.gromakowski@gmail.com" target="_blank">vincent.gromakowski@gmail.com</a><wbr>&gt;</span>:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div dir="ltr">Hi all,
              <div>I am currently setting a gluster volume on 4 Centos
                7.2 nodes. Everything seems to be OK from the volume
                creation to the fuse mounting but after that I can&#39;t
                access data (read or write) without a sudo even if I set
                777 permissions.</div>
              <div>I have checked that permissions on underlying FS (an
                XFS volume) are OK so I assume the problem is in Gluster
                but I can&#39;t find where.</div>
              <div>I am using ansible to deploy gluster, create volumes
                and mount fuse endpoint.</div>
              <div>Please find below some information:</div>
              <div><br>
              </div>
The line in /etc/fstab for mounting the raw device:
LABEL=/gluster /srv/gluster/data xfs defaults 0 0

The line in /etc/fstab for mounting the fuse endpoint:
bd-reactive-worker-2:/reactive_small /srv/data/small glusterfs defaults,_netdev 0 0
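One thing commonly suggested for client-side ACL behaviour (a sketch only, not a confirmed fix for this case): if POSIX ACLs are supposed to be honoured on the client, the glusterfs fuse mount generally needs the "acl" mount option, and listing backup volfile servers avoids the "Exhausted all volfile servers" situation visible in the client log further down. A hypothetical fstab line and a manual test mount would look like:

# /etc/fstab (illustrative only)
bd-reactive-worker-2:/reactive_small /srv/data/small glusterfs defaults,_netdev,acl,backup-volfile-servers=bd-reactive-worker-3:bd-reactive-worker-4 0 0

# Or test by hand before touching fstab
$ sudo umount /srv/data/small
$ sudo mount -t glusterfs -o acl bd-reactive-worker-2:/reactive_small /srv/data/small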
>sudo gluster volume info
Volume Name: reactive_small
Type: Distributed-Replicate
Volume ID: f0abede2-eab3-4a0b-8271-ffd6f3c83eb6
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: bd-reactive-worker-1:/srv/gluster/data/small/brick1
Brick2: bd-reactive-worker-2:/srv/gluster/data/small/brick1
Brick3: bd-reactive-worker-3:/srv/gluster/data/small/brick1
Brick4: bd-reactive-worker-4:/srv/gluster/data/small/brick1
Brick5: bd-reactive-worker-1:/srv/gluster/data/small/brick2
Brick6: bd-reactive-worker-2:/srv/gluster/data/small/brick2
Brick7: bd-reactive-worker-3:/srv/gluster/data/small/brick2
Brick8: bd-reactive-worker-4:/srv/gluster/data/small/brick2
Brick9: bd-reactive-worker-1:/srv/gluster/data/small/brick3
Brick10: bd-reactive-worker-2:/srv/gluster/data/small/brick3
Brick11: bd-reactive-worker-3:/srv/gluster/data/small/brick3
Brick12: bd-reactive-worker-4:/srv/gluster/data/small/brick3
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
cluster.data-self-heal: off
cluster.entry-self-heal: off
cluster.metadata-self-heal: off
cluster.self-heal-daemon: off
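Since the open question above is about ACLs on the brick directories, a quick sketch (assuming password-less ssh to the workers and the brick layout shown above) for comparing ownership and mode of every brick root across the four nodes:

for h in bd-reactive-worker-{1..4}; do
  echo "== $h =="
  # ownership/mode of each brick root for the reactive_small volume on that node
  ssh "$h" 'ls -ld /srv/gluster/data/small/brick1 /srv/gluster/data/small/brick2 /srv/gluster/data/small/brick3'
done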
>sudo cat /var/log/glusterfs/cli.log
[2016-12-26 13:41:11.422850] I [cli.c:730:main] 0-cli: Started running gluster with version 3.8.5
[2016-12-26 13:41:11.428970] I [cli-cmd-volume.c:1828:cli_check_gsync_present] 0-: geo-replication not installed
[2016-12-26 13:41:11.429308] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-12-26 13:41:11.429360] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2016-12-26 13:41:11.430285] I [socket.c:3391:socket_submit_request] 0-glusterfs: not connected (priv->connected = 0)
[2016-12-26 13:41:11.430320] W [rpc-clnt.c:1640:rpc_clnt_submit] 0-glusterfs: failed to submit rpc-request (XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 5) to rpc-transport (glusterfs)
[2016-12-26 13:41:24.967491] I [cli.c:730:main] 0-cli: Started running gluster with version 3.8.5
[2016-12-26 13:41:24.972755] I [cli-cmd-volume.c:1828:cli_check_gsync_present] 0-: geo-replication not installed
[2016-12-26 13:41:24.973014] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-12-26 13:41:24.973080] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2016-12-26 13:41:24.973552] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
[2016-12-26 13:41:24.976419] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
[2016-12-26 13:41:24.976957] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
[2016-12-26 13:41:24.976985] I [input.c:31:cli_batch] 0-: Exiting with: 0
>sudo cat /var/log/glusterfs/srv-data-small.log
[2016-12-26 13:46:53.407541] W [socket.c:590:__socket_rwv] 0-glusterfs: readv on 172.52.0.4:24007 failed (No data available)
[2016-12-26 13:46:53.407997] E [glusterfsd-mgmt.c:1902:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: 172.52.0.4 (No data available)
[2016-12-26 13:46:53.408079] I [glusterfsd-mgmt.c:1919:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2016-12-26 13:46:54.736497] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-3: changing port to 49155 (from 0)
[2016-12-26 13:46:54.738710] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-7: changing port to 49156 (from 0)
[2016-12-26 13:46:54.738766] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-11: changing port to 49157 (from 0)
[2016-12-26 13:46:54.742911] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-reactive_small-client-3: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-26 13:46:54.743199] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-reactive_small-client-7: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-26 13:46:54.743476] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-reactive_small-client-3: Connected to reactive_small-client-3, attached to remote volume '/srv/gluster/data/small/brick1'.
[2016-12-26 13:46:54.743488] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-reactive_small-client-3: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-26 13:46:54.743603] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-reactive_small-client-7: Connected to reactive_small-client-7, attached to remote volume '/srv/gluster/data/small/brick2'.
[2016-12-26 13:46:54.743614] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-reactive_small-client-7: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-26 13:46:54.743673] I [MSGID: 108002] [afr-common.c:4371:afr_notify] 0-reactive_small-replicate-2: Client-quorum is met
[2016-12-26 13:46:54.743694] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-reactive_small-client-3: Server lk version = 1
[2016-12-26 13:46:54.743798] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-reactive_small-client-7: Server lk version = 1
[2016-12-26 13:46:54.745749] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-reactive_small-client-11: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-26 13:46:54.746211] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-reactive_small-client-11: Connected to reactive_small-client-11, attached to remote volume '/srv/gluster/data/small/brick3'.
[2016-12-26 13:46:54.746226] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-reactive_small-client-11: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-26 13:46:54.746288] I [MSGID: 108002] [afr-common.c:4371:afr_notify] 0-reactive_small-replicate-3: Client-quorum is met
[2016-12-26 13:46:54.746403] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-reactive_small-client-11: Server lk version = 1
[2016-12-26 13:46:54.765923] E [MSGID: 114058] [client-handshake.c:1533:client_query_portmap_cbk] 0-reactive_small-client-2: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2016-12-26 13:46:54.765951] E [MSGID: 114058] [client-handshake.c:1533:client_query_portmap_cbk] 0-reactive_small-client-10: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2016-12-26 13:46:54.765986] E [MSGID: 114058] [client-handshake.c:1533:client_query_portmap_cbk] 0-reactive_small-client-6: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2016-12-26 13:46:54.766001] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-reactive_small-client-2: disconnected from reactive_small-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2016-12-26 13:46:54.766013] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-reactive_small-client-10: disconnected from reactive_small-client-10. Client process will keep trying to connect to glusterd until brick's port is available
[2016-12-26 13:46:54.766032] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-reactive_small-client-6: disconnected from reactive_small-client-6. Client process will keep trying to connect to glusterd until brick's port is available
[2016-12-26 13:46:57.019722] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-2: changing port to 49155 (from 0)
[2016-12-26 13:46:57.021611] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-6: changing port to 49156 (from 0)
[2016-12-26 13:46:57.025630] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-reactive_small-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-26 13:46:57.026240] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-reactive_small-client-2: Connected to reactive_small-client-2, attached to remote volume '/srv/gluster/data/small/brick1'.
[2016-12-26 13:46:57.026252] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-reactive_small-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-26 13:46:57.026312] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-10: changing port to 49157 (from 0)
[2016-12-26 13:46:57.027737] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-reactive_small-client-2: Server lk version = 1
[2016-12-26 13:46:57.029251] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-reactive_small-client-6: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-26 13:46:57.029781] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-reactive_small-client-6: Connected to reactive_small-client-6, attached to remote volume '/srv/gluster/data/small/brick2'.
[2016-12-26 13:46:57.029798] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-reactive_small-client-6: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-26 13:46:57.030194] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-reactive_small-client-6: Server lk version = 1
[2016-12-26 13:46:57.031709] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-reactive_small-client-10: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-12-26 13:46:57.032215] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-reactive_small-client-10: Connected to reactive_small-client-10, attached to remote volume '/srv/gluster/data/small/brick3'.
[2016-12-26 13:46:57.032224] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-reactive_small-client-10: Server and Client lk-version numbers are not same, reopening the fds
[2016-12-26 13:46:57.032475] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-reactive_small-client-10: Server lk version = 1
[2016-12-26 13:47:04.032294] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2016-12-26 13:59:01.935684] I [MSGID: 108031] [afr-common.c:2067:afr_local_discovery_cbk] 0-reactive_small-replicate-0: selecting local read_child reactive_small-client-1
[2016-12-26 13:59:01.937790] I [MSGID: 108031] [afr-common.c:2067:afr_local_discovery_cbk] 0-reactive_small-replicate-1: selecting local read_child reactive_small-client-5
[2016-12-26 13:59:01.938727] I [MSGID: 108031] [afr-common.c:2067:afr_local_discovery_cbk] 0-reactive_small-replicate-3: selecting local read_child reactive_small-client-9
[2016-12-26 13:59:09.566572] I [dict.c:462:dict_get] (-->/usr/lib64/glusterfs/3.8.5/xlator/debug/io-stats.so(+0x13628) [0x7fada9d4c628] -->/usr/lib64/glusterfs/3.8.5/xlator/system/posix-acl.so(+0x9d0b) [0x7fada9b30d0b] -->/lib64/libglusterfs.so.0(dict_get+0xec) [0x7fadb913933c] ) 0-dict: !this || key=system.posix_acl_access [Invalid argument]
[2016-12-26 13:59:09.566730] I [dict.c:462:dict_get] (-->/usr/lib64/glusterfs/3.8.5/xlator/debug/io-stats.so(+0x13628) [0x7fada9d4c628] -->/usr/lib64/glusterfs/3.8.5/xlator/system/posix-acl.so(+0x9d61) [0x7fada9b30d61] -->/lib64/libglusterfs.so.0(dict_get+0xec) [0x7fadb913933c] ) 0-dict: !this || key=system.posix_acl_default [Invalid argument]
>sudo cat /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
[2016-12-26 13:46:37.511891] I [MSGID: 106487] [glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2016-12-26 13:46:53.407000] W [socket.c:590:__socket_rwv] 0-management: readv on 172.52.0.4:24007 failed (No data available)
[2016-12-26 13:46:53.407171] I [MSGID: 106004] [glusterd-handler.c:5219:__glusterd_peer_rpc_notify] 0-management: Peer <bd-reactive-worker-1> (<59500674-750f-4e16-aeea-4a99fd67218a>), in state <Peer in Cluster>, has disconnected from glusterd.
[2016-12-26 13:46:53.407532] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0x1de5c) [0x7f467cda0e5c] -->/usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0x27a08) [0x7f467cdaaa08] -->/usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0xd07fa) [0x7f467ce537fa] ) 0-management: Lock for vol reactive_large not held
[2016-12-26 13:46:53.407575] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for reactive_large
[2016-12-26 13:46:53.407694] W [glusterd-locks.c:675:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0x1de5c) [0x7f467cda0e5c] -->/usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0x27a08) [0x7f467cdaaa08] -->/usr/lib64/glusterfs/3.8.5/xlator/mgmt/glusterd.so(+0xd07fa) [0x7f467ce537fa] ) 0-management: Lock for vol reactive_small not held
[2016-12-26 13:46:53.407723] W [MSGID: 106118] [glusterd-handler.c:5241:__glusterd_peer_rpc_notify] 0-management: Lock not released for reactive_small
[2016-12-26 13:46:53.485185] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2016-12-26 13:46:53.489760] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 59500674-750f-4e16-aeea-4a99fd67218a
[2016-12-26 13:46:53.529568] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f4687483dc5] -->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5) [0x7f4688b17cd5] -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x7f4688b17b4b] ) 0-: received signum (15), shutting down
[2016-12-26 13:46:53.562392] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.8.5 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2016-12-26 13:46:53.569917] I [MSGID: 106478] [glusterd.c:1379:init] 0-management: Maximum allowed open file descriptors set to 65536
[2016-12-26 13:46:53.569959] I [MSGID: 106479] [glusterd.c:1428:init] 0-management: Using /var/lib/glusterd as working directory
[2016-12-26 13:46:53.575301] E [rpc-transport.c:287:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.8.5/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
[2016-12-26 13:46:53.575327] W [rpc-transport.c:291:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
[2016-12-26 13:46:53.575335] W [rpcsvc.c:1638:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2016-12-26 13:46:53.575341] E [MSGID: 106243] [glusterd.c:1652:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2016-12-26 13:46:53.576843] I [MSGID: 106228] [glusterd.c:429:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system [No such file or directory]
[2016-12-26 13:46:53.577209] I [MSGID: 106513] [glusterd-store.c:2098:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30800
[2016-12-26 13:46:53.720253] I [MSGID: 106498] [glusterd-handler.c:3649:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2016-12-26 13:46:53.720477] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2016-12-26 13:46:53.723273] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2016-12-26 13:46:53.725591] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
The message "I [MSGID: 106498] [glusterd-handler.c:3649:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0" repeated 2 times between [2016-12-26 13:46:53.720253] and [2016-12-26 13:46:53.720391]
[2016-12-26 13:46:53.728948] I [MSGID: 106544] [glusterd.c:155:glusterd_uuid_init] 0-management: retrieved UUID: 2767e4e8-e203-4f77-8087-298c5a0f862f
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.socket.listen-backlog 128
  8:     option event-threads 1
  9:     option ping-timeout 0
 10:     option transport.socket.read-fail-log off
 11:     option transport.socket.keepalive-interval 2
 12:     option transport.socket.keepalive-time 10
 13:     option transport-type rdma
 14:     option working-directory /var/lib/glusterd
 15: end-volume
 16:
+------------------------------------------------------------------------------+
[2016-12-26 13:46:53.732358] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-12-26 13:46:53.739916] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2016-12-26 13:46:54.735745] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2016-12-26 13:46:54.743668] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 854a4235-dff0-4ae8-8589-72aa6ce6a35f
[2016-12-26 13:46:54.745380] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to bd-reactive-worker-4 (0), ret: 0, op_ret: 0
[2016-12-26 13:46:54.752307] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600
[2016-12-26 13:46:54.752443] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2016-12-26 13:46:54.752472] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2016-12-26 13:46:54.752849] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600
[2016-12-26 13:46:54.753881] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 17578
[2016-12-26 13:46:55.754166] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2016-12-26 13:46:55.754226] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2016-12-26 13:46:55.765127] W [socket.c:3065:socket_connect] 0-glustershd: Ignore failed connection attempt on , (No such file or directory)
[2016-12-26 13:46:55.765272] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600
[2016-12-26 13:46:55.765511] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2016-12-26 13:46:55.765583] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2016-12-26 13:46:55.765680] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600
[2016-12-26 13:46:55.765876] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2016-12-26 13:46:55.765922] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2016-12-26 13:46:55.766041] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600
[2016-12-26 13:46:55.766312] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2016-12-26 13:46:55.766383] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2016-12-26 13:46:55.766613] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2016-12-26 13:46:55.766878] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2016-12-26 13:46:55.767109] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2016-12-26 13:46:55.767252] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2016-12-26 13:46:55.767420] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2016-12-26 13:46:55.767670] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2016-12-26 13:46:55.767800] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2016-12-26 13:46:55.767916] I [rpc-clnt.c:1033:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600
[2016-12-26 13:46:55.768115] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 854a4235-dff0-4ae8-8589-72aa6ce6a35f
[2016-12-26 13:46:55.769849] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2016-12-26 13:46:55.771341] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 9885f122-6242-4ad8-96ee-3a8e25c2d98e
[2016-12-26 13:46:55.772677] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to bd-reactive-worker-3 (0), ret: 0, op_ret: 0
[2016-12-26 13:46:55.775913] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2016-12-26 13:46:55.775946] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2016-12-26 13:46:55.777210] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 17762
[2016-12-26 13:46:56.778124] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2016-12-26 13:46:56.778194] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2016-12-26 13:46:56.781946] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2016-12-26 13:46:56.781976] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2016-12-26 13:46:56.782024] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2016-12-26 13:46:56.782046] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2016-12-26 13:46:56.782075] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2016-12-26 13:46:56.782085] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2016-12-26 13:46:56.785199] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 854a4235-dff0-4ae8-8589-72aa6ce6a35f, host: bd-reactive-worker-4, port: 0
[2016-12-26 13:46:56.789916] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 854a4235-dff0-4ae8-8589-72aa6ce6a35f
[2016-12-26 13:46:56.791664] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2016-12-26 13:46:56.795667] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 9885f122-6242-4ad8-96ee-3a8e25c2d98e
[2016-12-26 13:46:56.801246] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2016-12-26 13:46:56.801309] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 854a4235-dff0-4ae8-8589-72aa6ce6a35f
[2016-12-26 13:46:56.801334] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 9885f122-6242-4ad8-96ee-3a8e25c2d98e, host: bd-reactive-worker-3, port: 0
[2016-12-26 13:46:56.802748] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 9885f122-6242-4ad8-96ee-3a8e25c2d98e
[2016-12-26 13:46:56.806969] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2016-12-26 13:46:56.808523] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 9885f122-6242-4ad8-96ee-3a8e25c2d98e
[2016-12-26 13:46:57.439163] I [MSGID: 106493] [glusterd-rpc-ops.c:476:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 59500674-750f-4e16-aeea-4a99fd67218a, host: bd-reactive-worker-1, port: 0
[2016-12-26 13:46:57.443271] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped
[2016-12-26 13:46:57.443317] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: nfs service is stopped
[2016-12-26 13:46:57.444603] I [MSGID: 106568] [glusterd-proc-mgmt.c:87:glusterd_proc_stop] 0-management: Stopping glustershd daemon running in pid: 17790
[2016-12-26 13:46:58.444802] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: glustershd service is stopped
[2016-12-26 13:46:58.444867] I [MSGID: 106567] [glusterd-svc-mgmt.c:196:glusterd_svc_start] 0-management: Starting glustershd service
[2016-12-26 13:46:58.448158] W [socket.c:3065:socket_connect] 0-glustershd: Ignore failed connection attempt on , (No such file or directory)
[2016-12-26 13:46:58.448293] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped
[2016-12-26 13:46:58.448322] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: quotad service is stopped
[2016-12-26 13:46:58.448378] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped
[2016-12-26 13:46:58.448396] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: bitd service is stopped
[2016-12-26 13:46:58.448447] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped
[2016-12-26 13:46:58.448464] I [MSGID: 106568] [glusterd-svc-mgmt.c:228:glusterd_svc_stop] 0-management: scrub service is stopped
[2016-12-26 13:46:58.448523] I [MSGID: 106487] [glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
[2016-12-26 13:46:58.482252] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 59500674-750f-4e16-aeea-4a99fd67218a
[2016-12-26 13:46:58.484951] I [MSGID: 106163] [glusterd-handshake.c:1271:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30800
[2016-12-26 13:46:58.492305] I [MSGID: 106490] [glusterd-handler.c:2608:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 59500674-750f-4e16-aeea-4a99fd67218a
[2016-12-26 13:46:58.493713] I [MSGID: 106493] [glusterd-handler.c:3852:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to bd-reactive-worker-1 (0), ret: 0, op_ret: 0
[2016-12-26 13:46:58.501512] I [MSGID: 106492] [glusterd-handler.c:2784:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 59500674-750f-4e16-aeea-4a99fd67218a
[2016-12-26 13:46:58.503348] I [MSGID: 106502] [glusterd-handler.c:2829:__glusterd_handle_friend_update] 0-management: Received my uuid as Friend
[2016-12-26 13:46:58.509794] I [MSGID: 106493] [glusterd-rpc-ops.c:691:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 59500674-750f-4e16-aeea-4a99fd67218a
[2016-12-26 13:47:04.057563] I [MSGID: 106143] [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /srv/gluster/data/small/brick3 on port 49157
[2016-12-26 13:47:04.058477] I [MSGID: 106143] [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /srv/gluster/data/large/brick1 on port 49152
[2016-12-26 13:47:04.059496] I [MSGID: 106143] [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /srv/gluster/data/small/brick1 on port 49155
[2016-12-26 13:47:04.059546] I [MSGID: 106143] [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /srv/gluster/data/large/brick3 on port 49154
[2016-12-26 13:47:04.072431] I [MSGID: 106143] [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /srv/gluster/data/small/brick2 on port 49156
[2016-12-26 13:47:04.262372] I [MSGID: 106143] [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /srv/gluster/data/large/brick2 on port 49153
[2016-12-26 13:47:59.970037] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume reactive_large
[2016-12-26 13:47:59.978405] I [MSGID: 106499] [glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: Received status volume req for volume reactive_small
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users