<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>It seems like an actual bug. If you can file a bug in
Bugzilla, that would be great.<br>
</p>
<p><br>
</p>
<p>I don't see a workaround for this issue; until the next
update with the fix is available, you can use a volume with
either rdma alone or tcp alone.</p>
<p>Let me know whether this is acceptable. If so, I can give you
the steps to change the transport of an existing volume.</p>
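<p>For reference, the transport change is typically done along
these lines (a sketch only, not the official steps; the volume
must be stopped first, and the volume name cees-data is taken
from the thread below):</p>

```shell
# Sketch: switch an existing volume from tcp,rdma to tcp only.
# Unmount all clients first; the volume must be stopped before
# its transport type can be changed.
gluster volume stop cees-data
gluster volume set cees-data config.transport tcp
gluster volume start cees-data
```

<p>Clients then need to remount the volume using the new
transport.</p>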
<p><br>
</p>
<p>Regards</p>
<p>Rafi KC<br>
</p>
<br>
<div class="moz-cite-prefix">On 09/30/2016 10:35 AM, Mohammed Rafi K
C wrote:<br>
</div>
<blockquote
cite="mid:4aef7416-a6a9-4f2e-6141-392034d2a5c1@redhat.com"
type="cite">
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<p><br>
</p>
<br>
<div class="moz-cite-prefix">On 09/30/2016 02:35 AM, Dennis
Michael wrote:<br>
</div>
<blockquote
cite="mid:CA+pUh1MxvCCe3JM9tr=LkoRf4uLq_RPyPpV-A_+E5qTj4YAx=w@mail.gmail.com"
type="cite">
<div dir="ltr"><br>
<div>Are there any workarounds to this? RDMA is configured on
my servers.</div>
</div>
</blockquote>
<br>
<br>
By this, I assume your rdma setup/configuration over IPoIB is
working fine.<br>
<br>
Can you tell us what machine you are using and whether SELinux is
enabled on it?<br>
<br>
Also, I don't see any logs attached here.<br>
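<p>A quick way to check and report the SELinux state (standard
commands on RHEL/CentOS-family machines):</p>

```shell
# Full SELinux status: enabled/disabled, current mode, policy
sestatus
# One-word mode: Enforcing, Permissive, or Disabled
getenforce
```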
<br>
Rafi KC<br>
<br>
<br>
<blockquote
cite="mid:CA+pUh1MxvCCe3JM9tr=LkoRf4uLq_RPyPpV-A_+E5qTj4YAx=w@mail.gmail.com"
type="cite">
<div dir="ltr">
<div><br>
</div>
<div>Dennis</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Sep 29, 2016 at 7:19 AM, Atin
Mukherjee <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div>
<div>
<div>
<div>Dennis,<br>
<br>
</div>
Thanks for sharing the logs.<br>
<br>
</div>
It seems like a volume created with tcp,rdma transport fails to
start (at least in my local set up). The issue here is that
although the brick process comes up, glusterd receives a
non-zero return code from the runner interface which spawns the
brick process(es).<br>
<br>
</div>
Raghavendra Talur/Rafi,<br>
<br>
</div>
Is this the intended behaviour if the rdma device is not
configured? Please chime in with your thoughts.<br>
<div><br>
<div class="gmail_extra">
<div>
<div class="h5"><br>
<div class="gmail_quote">On Wed, Sep 28, 2016 at
10:22 AM, Atin Mukherjee <span dir="ltr"><<a
moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex">
<div dir="ltr">
<div>Dennis,<br>
<br>
</div>
It seems that add-brick has definitely
failed and the entry was not committed
into the glusterd store. The volume
status and volume info commands refer
to the in-memory data for fs4 (which
exists), but after a restart it is no
longer available. Could you run
glusterd with debug logging enabled
(systemctl stop glusterd; glusterd
-LDEBUG) and provide us
cmd_history.log and the glusterd log
along with the fs4 brick log files to
further analyze the issue? Regarding
the missing RDMA ports for the fs2 and
fs3 bricks, can you cross-check whether
the glusterfs-rdma package is installed
on both nodes?<br>
</div>
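<p>Concretely, the debug run described above might look like
this (a sketch; the log locations are the usual defaults and may
differ on your install):</p>

```shell
# On fs4: stop the managed service and run glusterd in the
# foreground with debug logging
systemctl stop glusterd
glusterd -LDEBUG

# Re-run the failing add-brick from fs1, then collect from the
# usual log directory: cmd_history.log, the glusterd log, and
# bricks/data-brick.log under /var/log/glusterfs/

# On fs2 and fs3: verify the RDMA transport package is installed
rpm -q glusterfs-rdma
```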
<div class="gmail_extra">
<div>
<div><br>
<div class="gmail_quote">On Wed, Sep
28, 2016 at 7:14 AM, Ravishankar N <span
dir="ltr"><<a
moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:ravishankar@redhat.com">ravishankar@redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex">
<div bgcolor="#FFFFFF"
text="#000000">
<div>
<div>
<div>On 09/27/2016 10:29 PM,
Dennis Michael wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
<div><br>
</div>
<div>[root@fs4 bricks]#
gluster volume info</div>
<div> </div>
<div>Volume Name:
cees-data</div>
<div>Type: Distribute</div>
<div>Volume ID:
27d2a59c-bdac-4f66-bcd8-e6124e<wbr>53a4a2</div>
<div>Status: Started</div>
<div>Number of Bricks: 4</div>
<div>Transport-type:
tcp,rdma</div>
<div>Bricks:</div>
<div>Brick1:
fs1:/data/brick</div>
<div>Brick2:
fs2:/data/brick</div>
<div>Brick3:
fs3:/data/brick</div>
<div>Brick4:
fs4:/data/brick</div>
<div>Options
Reconfigured:</div>
<div>features.quota-deem-statfs:
on</div>
<div>features.inode-quota:
on</div>
<div>features.quota: on</div>
<div>performance.readdir-ahead:
on</div>
<div>[root@fs4 bricks]#
gluster volume status</div>
<div>Status of volume:
cees-data</div>
<div>Gluster process
TCP Port RDMA Port
Online Pid</div>
<div>------------------------------<wbr>------------------------------<wbr>------------------</div>
<div>Brick
fs1:/data/brick
49152
49153 Y
1878 </div>
<div>Brick
fs2:/data/brick
49152
0 Y
1707 </div>
<div>Brick
fs3:/data/brick
49152
0 Y
4696 </div>
<div>Brick
fs4:/data/brick
N/A
N/A N
N/A </div>
<div>NFS Server on
localhost
2049 0
Y 13808</div>
<div>Quota Daemon on
localhost
N/A N/A
Y 13813</div>
<div>NFS Server on fs1
2049 0
Y 6722 </div>
<div>Quota Daemon on fs1
N/A N/A
Y 6730 </div>
<div>NFS Server on fs3
2049 0
Y 12553</div>
<div>Quota Daemon on fs3
N/A N/A
Y 12561</div>
<div>NFS Server on fs2
2049 0
Y 11702</div>
<div>Quota Daemon on fs2
N/A N/A
Y 11710</div>
<div> </div>
<div>Task Status of
Volume cees-data</div>
<div>------------------------------<wbr>------------------------------<wbr>------------------</div>
<div>There are no active
volume tasks</div>
<div> </div>
<div>[root@fs4 bricks]#
ps auxww | grep
gluster</div>
<div>root 13791 0.0
0.0 701472 19768 ?
Ssl 09:06 0:00
/usr/sbin/glusterd -p
/var/run/glusterd.pid
--log-level INFO</div>
<div>root 13808 0.0
0.0 560236 41420 ?
Ssl 09:07 0:00
/usr/sbin/glusterfs -s
localhost --volfile-id
gluster/nfs -p
/var/lib/glusterd/nfs/run/nfs.<wbr>pid
-l
/var/log/glusterfs/nfs.log
-S
/var/run/gluster/01c6152337436<wbr>9658a62b75c582b5ac2.socket</div>
<div>root 13813 0.0
0.0 443164 17908 ?
Ssl 09:07 0:00
/usr/sbin/glusterfs -s
localhost --volfile-id
gluster/quotad -p
/var/lib/glusterd/quotad/run/q<wbr>uotad.pid
-l
/var/log/glusterfs/quotad.log
-S
/var/run/gluster/3753def90f5c3<wbr>4f656513dba6a544f7d.socket
--xlator-option
*replicate*.data-self-heal=off
--xlator-option
*replicate*.metadata-self-heal<wbr>=off
--xlator-option
*replicate*.entry-self-heal=of<wbr>f</div>
<div>root 13874 0.0
0.0 1200472 31700 ?
Ssl 09:16 0:00
/usr/sbin/glusterfsd
-s fs4 --volfile-id
cees-data.fs4.data-brick
-p
/var/lib/glusterd/vols/cees-da<wbr>ta/run/fs4-data-brick.pid
-S
/var/run/gluster/5203ab38be21e<wbr>1d37c04f6bdfee77d4a.socket
--brick-name
/data/brick -l
/var/log/glusterfs/bricks/data<wbr>-brick.log
--xlator-option
*-posix.glusterd-uuid=f04b231e<wbr>-63f8-4374-91ae-17c0c623f165
--brick-port 49152
49153 --xlator-option
cees-data-server.transport.rdm<wbr>a.listen-port=49153 --xlator-option
cees-data-server.listen-port=4<wbr>9152
--volfile-server-transport=soc<wbr>ket,rdma</div>
<div>root 13941 0.0
0.0 112648 976
pts/0 S+ 09:50
0:00 grep --color=auto
gluster</div>
<div><br>
</div>
<div>[root@fs4 bricks]#
systemctl restart
glusterfsd glusterd</div>
<div><br>
</div>
<div>[root@fs4 bricks]#
ps auxww | grep
gluster</div>
<div>root 13808 0.0
0.0 560236 41420 ?
Ssl 09:07 0:00
/usr/sbin/glusterfs -s
localhost --volfile-id
gluster/nfs -p
/var/lib/glusterd/nfs/run/nfs.<wbr>pid
-l
/var/log/glusterfs/nfs.log
-S
/var/run/gluster/01c6152337436<wbr>9658a62b75c582b5ac2.socket</div>
<div>root 13813 0.0
0.0 443164 17908 ?
Ssl 09:07 0:00
/usr/sbin/glusterfs -s
localhost --volfile-id
gluster/quotad -p
/var/lib/glusterd/quotad/run/q<wbr>uotad.pid
-l
/var/log/glusterfs/quotad.log
-S
/var/run/gluster/3753def90f5c3<wbr>4f656513dba6a544f7d.socket
--xlator-option
*replicate*.data-self-heal=off
--xlator-option
*replicate*.metadata-self-heal<wbr>=off
--xlator-option
*replicate*.entry-self-heal=of<wbr>f</div>
<div>root 13953 0.1
0.0 570740 14988 ?
Ssl 09:51 0:00
/usr/sbin/glusterd -p
/var/run/glusterd.pid
--log-level INFO</div>
<div>root 13965 0.0
0.0 112648 976
pts/0 S+ 09:51
0:00 grep --color=auto
gluster</div>
<div><br>
</div>
<div>[root@fs4 bricks]#
gluster volume info</div>
<div> </div>
<div>Volume Name:
cees-data</div>
<div>Type: Distribute</div>
<div>Volume ID:
27d2a59c-bdac-4f66-bcd8-e6124e<wbr>53a4a2</div>
<div>Status: Started</div>
<div>Number of Bricks: 3</div>
<div>Transport-type:
tcp,rdma</div>
<div>Bricks:</div>
<div>Brick1:
fs1:/data/brick</div>
<div>Brick2:
fs2:/data/brick</div>
<div>Brick3:
fs3:/data/brick</div>
<div>Options
Reconfigured:</div>
<div>performance.readdir-ahead:
on</div>
<div>features.quota: on</div>
<div>features.inode-quota:
on</div>
<div>features.quota-deem-statfs:
on</div>
</div>
</blockquote>
<br>
<br>
</div>
</div>
I'm not sure what's going on
here. Restarting glusterd seems
to change the output of gluster
volume info? I also see you are
using RDMA. Not sure why the
RDMA ports for fs2 and fs3 are
not shown in the volume status
output. CC'ing some
glusterd/rdma devs for pointers.<br>
<br>
-Ravi
<div>
<div><br>
<br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div>[root@fs4 bricks]#
gluster volume status</div>
<div>Status of volume:
cees-data</div>
<div>Gluster process
TCP Port RDMA Port
Online Pid</div>
<div>------------------------------<wbr>------------------------------<wbr>------------------</div>
<div>Brick
fs1:/data/brick
49152
49153 Y
1878 </div>
<div>Brick
fs2:/data/brick
49152
0 Y
1707 </div>
<div>Brick
fs3:/data/brick
49152
0 Y
4696 </div>
<div>NFS Server on
localhost
2049 0
Y 13968</div>
<div>Quota Daemon on
localhost
N/A N/A
Y 13976</div>
<div>NFS Server on fs2
2049 0
Y 11702</div>
<div>Quota Daemon on fs2
N/A N/A
Y 11710</div>
<div>NFS Server on fs3
2049 0
Y 12553</div>
<div>Quota Daemon on fs3
N/A N/A
Y 12561</div>
<div>NFS Server on fs1
2049 0
Y 6722 </div>
<div> </div>
<div>Task Status of
Volume cees-data</div>
<div>------------------------------<wbr>------------------------------<wbr>------------------</div>
<div>There are no active
volume tasks</div>
<div><br>
</div>
<div>
<div>[root@fs4
bricks]# gluster
peer status</div>
<div>Number of Peers:
3</div>
<div><br>
</div>
<div>Hostname: fs1</div>
<div>Uuid:
ddc0a23e-05e5-48f7-993e-a37e43<wbr>b21605</div>
<div>State: Peer in
Cluster (Connected)</div>
<div><br>
</div>
<div>Hostname: fs2</div>
<div>Uuid:
e37108f8-d2f1-4f28-adc8-0b3d34<wbr>01df29</div>
<div>State: Peer in
Cluster (Connected)</div>
<div><br>
</div>
<div>Hostname: fs3</div>
<div>Uuid:
19a42201-c932-44db-b1a7-8b5b1a<wbr>f32a36</div>
<div>State: Peer in
Cluster (Connected)</div>
</div>
<div><br>
</div>
<div>Dennis</div>
<div><br>
</div>
<div class="gmail_extra"><br>
<div
class="gmail_quote">On
Tue, Sep 27, 2016 at
9:40 AM, Ravishankar
N <span dir="ltr"><<a
moz-do-not-send="true" class="moz-txt-link-abbreviated"
href="mailto:ravishankar@redhat.com">ravishankar@redhat.com</a>></span>
wrote:<br>
<blockquote
class="gmail_quote"
style="margin:0px
0px 0px
0.8ex;border-left:1px
solid
rgb(204,204,204);padding-left:1ex">
<div
bgcolor="#FFFFFF">
<div>
<div>
<div>On
09/27/2016
09:53 PM,
Dennis Michael
wrote:<br>
</div>
<blockquote
type="cite">
<div dir="ltr">Yes,
you are
right. I
mixed up the
logs. I just
ran the
add-brick
command again
after cleaning
up fs4 and
re-installing
gluster. This
is the
complete fs4
data-brick.log.
<div><br>
</div>
<div>
<div>[root@fs1
~]# gluster
volume
add-brick
cees-data
fs4:/data/brick</div>
<div>volume
add-brick:
failed: Commit
failed on fs4.
Please check
log file for
details.</div>
<div><br>
</div>
<div>
<div>[root@fs4
bricks]# pwd</div>
<div>/var/log/glusterfs/bricks</div>
<div>[root@fs4
bricks]# cat
data-brick.log </div>
<div>[2016-09-27
16:16:28.095661] I [MSGID: 100030] [glusterfsd.c:2338:main]
0-/usr/sbin/glusterfsd:
Started
running
/usr/sbin/glusterfsd
version 3.7.14
(args:
/usr/sbin/glusterfsd
-s fs4
--volfile-id
cees-data.fs4.data-brick
-p
/var/lib/glusterd/vols/cees-da<wbr>ta/run/fs4-data-brick.pid
-S
/var/run/gluster/5203ab38be21e<wbr>1d37c04f6bdfee77d4a.socket
--brick-name
/data/brick -l
/var/log/glusterfs/bricks/data<wbr>-brick.log --xlator-option
*-posix.glusterd-uuid=f04b231e<wbr>-63f8-4374-91ae-17c0c623f165
--brick-port
49152
--xlator-option
cees-data-server.transport.rdm<wbr>a.listen-port=49153 --xlator-option
cees-data-server.listen-port=4<wbr>9152
--volfile-server-transport=soc<wbr>ket,rdma)</div>
<div>[2016-09-27
16:16:28.101547] I [MSGID: 101190] [event-epoll.c:632:event_dispa<wbr>tch_epoll_worker]
0-epoll:
Started thread
with index 1</div>
<div>[2016-09-27
16:16:28.104637] I [graph.c:269:gf_add_cmdline_op<wbr>tions]
0-cees-data-server:
adding option
'listen-port'
for volume
'cees-data-server'
with value
'49152'</div>
<div>[2016-09-27
16:16:28.104646] I [graph.c:269:gf_add_cmdline_op<wbr>tions]
0-cees-data-server:
adding option
'transport.rdma.listen-port' for volume 'cees-data-server' with value
'49153'</div>
<div>[2016-09-27
16:16:28.104662] I [graph.c:269:gf_add_cmdline_op<wbr>tions]
0-cees-data-posix:
adding option
'glusterd-uuid' for volume 'cees-data-posix' with value
'f04b231e-63f8-4374-91ae-17c0c<wbr>623f165'</div>
<div>[2016-09-27
16:16:28.104808] I [MSGID: 115034] [server.c:403:_check_for_auth_<wbr>option]
0-/data/brick:
skip format
check for
non-addr auth
option
auth.login./data/brick.allow</div>
<div>[2016-09-27
16:16:28.104814] I [MSGID: 115034] [server.c:403:_check_for_auth_<wbr>option]
0-/data/brick:
skip format
check for
non-addr auth
option
auth.login.18ddaf4c-ad98-4155-<wbr>9372-717eae718b4c.password</div>
<div>[2016-09-27
16:16:28.104883] I [MSGID: 101190] [event-epoll.c:632:event_dispa<wbr>tch_epoll_worker]
0-epoll:
Started thread
with index 2</div>
<div>[2016-09-27
16:16:28.105479] I [rpcsvc.c:2196:rpcsvc_set_outs<wbr>tanding_rpc_limit]
0-rpc-service:
Configured
rpc.outstanding-rpc-limit
with value 64</div>
<div>[2016-09-27
16:16:28.105532] W [MSGID: 101002] [options.c:957:xl_opt_validate<wbr>]
0-cees-data-server: option 'listen-port' is deprecated, preferred is
'transport.socket.listen-port'<wbr>,
continuing
with
correction</div>
<div>[2016-09-27
16:16:28.109456] W [socket.c:3665:reconfigure] 0-cees-data-quota: NBIO
on -1 failed
(Bad file
descriptor)</div>
<div>[2016-09-27
16:16:28.489255] I [MSGID: 121050] [ctr-helper.c:259:extract_ctr_<wbr>options]
0-gfdbdatastore: CTR Xlator is disabled.</div>
<div>[2016-09-27
16:16:28.489272] W [MSGID: 101105] [gfdb_sqlite3.h:239:gfdb_set_s<wbr>ql_params]
0-cees-data-changetimerecorder<wbr>: Failed to retrieve sql-db-pagesize
from
params.Assigning
default value:
4096</div>
<div>[2016-09-27
16:16:28.489278] W [MSGID: 101105] [gfdb_sqlite3.h:239:gfdb_set_s<wbr>ql_params]
0-cees-data-changetimerecorder<wbr>: Failed to retrieve
sql-db-journalmode
from
params.Assigning
default value:
wal</div>
<div>[2016-09-27
16:16:28.489284] W [MSGID: 101105] [gfdb_sqlite3.h:239:gfdb_set_s<wbr>ql_params]
0-cees-data-changetimerecorder<wbr>: Failed to retrieve sql-db-sync from
params.Assigning default value: off</div>
<div>[2016-09-27
16:16:28.489288] W [MSGID: 101105] [gfdb_sqlite3.h:239:gfdb_set_s<wbr>ql_params]
0-cees-data-changetimerecorder<wbr>: Failed to retrieve
sql-db-autovacuum
from
params.Assigning
default value:
none</div>
<div>[2016-09-27
16:16:28.490431] I [trash.c:2412:init] 0-cees-data-trash: no option
specified for
'eliminate',
using NULL</div>
<div>[2016-09-27
16:16:28.672814] W [graph.c:357:_log_if_unknown_o<wbr>ption]
0-cees-data-server:
option
'rpc-auth.auth-glusterfs'
is not
recognized</div>
<div>[2016-09-27
16:16:28.672854] W [graph.c:357:_log_if_unknown_o<wbr>ption]
0-cees-data-server:
option
'rpc-auth.auth-unix'
is not
recognized</div>
<div>[2016-09-27
16:16:28.672872] W [graph.c:357:_log_if_unknown_o<wbr>ption]
0-cees-data-server:
option
'rpc-auth.auth-null'
is not
recognized</div>
<div>[2016-09-27
16:16:28.672924] W [graph.c:357:_log_if_unknown_o<wbr>ption]
0-cees-data-quota:
option
'timeout' is
not recognized</div>
<div>[2016-09-27
16:16:28.672955] W [graph.c:357:_log_if_unknown_o<wbr>ption]
0-cees-data-trash:
option
'brick-path'
is not
recognized</div>
<div>Final
graph:</div>
<div>+-----------------------------<wbr>------------------------------<wbr>-------------------+</div>
<div> 1:
volume
cees-data-posix</div>
<div> 2:
type
storage/posix</div>
<div> 3:
option
glusterd-uuid
f04b231e-63f8-4374-91ae-17c0c6<wbr>23f165</div>
<div> 4:
option
directory
/data/brick</div>
<div> 5:
option
volume-id
27d2a59c-bdac-4f66-bcd8-e6124e<wbr>53a4a2</div>
<div> 6:
option
update-link-count-parent
on</div>
<div> 7:
end-volume</div>
<div> 8: </div>
<div> 9:
volume
cees-data-trash</div>
<div> 10:
type
features/trash</div>
<div> 11:
option
trash-dir
.trashcan</div>
<div> 12:
option
brick-path
/data/brick</div>
<div> 13:
option
trash-internal-op
off</div>
<div> 14:
subvolumes
cees-data-posix</div>
<div> 15:
end-volume</div>
<div> 16: </div>
<div> 17:
volume
cees-data-changetimerecorder</div>
<div> 18:
type
features/changetimerecorder</div>
<div> 19:
option db-type
sqlite3</div>
<div> 20:
option
hot-brick off</div>
<div> 21:
option db-name
brick.db</div>
<div> 22:
option db-path
/data/brick/.glusterfs/</div>
<div> 23:
option
record-exit
off</div>
<div> 24:
option
ctr_link_consistency
off</div>
<div> 25:
option
ctr_lookupheal_link_timeout
300</div>
<div> 26:
option
ctr_lookupheal_inode_timeout
300</div>
<div> 27:
option
record-entry
on</div>
<div> 28:
option
ctr-enabled
off</div>
<div> 29:
option
record-counters
off</div>
<div> 30:
option
ctr-record-metadata-heat
off</div>
<div> 31:
option
sql-db-cachesize
1000</div>
<div> 32:
option
sql-db-wal-autocheckpoint
1000</div>
<div> 33:
subvolumes
cees-data-trash</div>
<div> 34:
end-volume</div>
<div> 35: </div>
<div> 36:
volume
cees-data-changelog</div>
<div> 37:
type
features/changelog</div>
<div> 38:
option
changelog-brick
/data/brick</div>
<div> 39:
option
changelog-dir
/data/brick/.glusterfs/changel<wbr>ogs</div>
<div> 40:
option
changelog-barrier-timeout
120</div>
<div> 41:
subvolumes
cees-data-changetimerecorder</div>
<div> 42:
end-volume</div>
<div> 43: </div>
<div> 44:
volume
cees-data-bitrot-stub</div>
<div> 45:
type
features/bitrot-stub</div>
<div> 46:
option export
/data/brick</div>
<div> 47:
subvolumes
cees-data-changelog</div>
<div> 48:
end-volume</div>
<div> 49: </div>
<div> 50:
volume
cees-data-access-control</div>
<div> 51:
type
features/access-control</div>
<div> 52:
subvolumes
cees-data-bitrot-stub</div>
<div> 53:
end-volume</div>
<div> 54: </div>
<div> 55:
volume
cees-data-locks</div>
<div> 56:
type
features/locks</div>
<div> 57:
subvolumes
cees-data-access-control</div>
<div> 58:
end-volume</div>
<div> 59: </div>
<div> 60:
volume
cees-data-upcall</div>
<div> 61:
type
features/upcall</div>
<div> 62:
option
cache-invalidation
off</div>
<div> 63:
subvolumes
cees-data-locks</div>
<div> 64:
end-volume</div>
<div> 65: </div>
<div> 66:
volume
cees-data-io-threads</div>
<div> 67:
type
performance/io-threads</div>
<div> 68:
subvolumes
cees-data-upcall</div>
<div> 69:
end-volume</div>
<div> 70: </div>
<div> 71:
volume
cees-data-marker</div>
<div> 72:
type
features/marker</div>
<div> 73:
option
volume-uuid
27d2a59c-bdac-4f66-bcd8-e6124e<wbr>53a4a2</div>
<div> 74:
option
timestamp-file
/var/lib/glusterd/vols/cees-da<wbr>ta/marker.tstamp</div>
<div> 75:
option
quota-version
1</div>
<div> 76:
option xtime
off</div>
<div> 77:
option
gsync-force-xtime
off</div>
<div> 78:
option quota
on</div>
<div> 79:
option
inode-quota on</div>
<div> 80:
subvolumes
cees-data-io-threads</div>
<div> 81:
end-volume</div>
<div> 82: </div>
<div> 83:
volume
cees-data-barrier</div>
<div> 84:
type
features/barrier</div>
<div> 85:
option barrier
disable</div>
<div> 86:
option
barrier-timeout
120</div>
<div> 87:
subvolumes
cees-data-marker</div>
<div> 88:
end-volume</div>
<div> 89: </div>
<div> 90:
volume
cees-data-index</div>
<div> 91:
type
features/index</div>
<div> 92:
option
index-base
/data/brick/.glusterfs/indices</div>
<div> 93:
subvolumes
cees-data-barrier</div>
<div> 94:
end-volume</div>
<div> 95: </div>
<div> 96:
volume
cees-data-quota</div>
<div> 97:
type
features/quota</div>
<div> 98:
option
transport.socket.connect-path
/var/run/gluster/quotad.socket</div>
<div> 99:
option
transport-type
socket</div>
<div>100:
option
transport.address-family
unix</div>
<div>101:
option
volume-uuid
cees-data</div>
<div>102:
option
server-quota
on</div>
<div>103:
option timeout
0</div>
<div>104:
option
deem-statfs on</div>
<div>105:
subvolumes
cees-data-index</div>
<div>106:
end-volume</div>
<div>107: </div>
<div>108:
volume
cees-data-worm</div>
<div>109:
type
features/worm</div>
<div>110:
option worm
off</div>
<div>111:
subvolumes
cees-data-quota</div>
<div>112:
end-volume</div>
<div>113: </div>
<div>114:
volume
cees-data-read-only</div>
<div>115:
type
features/read-only</div>
<div>116:
option
read-only off</div>
<div>117:
subvolumes
cees-data-worm</div>
<div>118:
end-volume</div>
<div>119: </div>
<div>120:
volume
/data/brick</div>
<div>121:
type
debug/io-stats</div>
<div>122:
option
log-level INFO</div>
<div>123:
option
latency-measurement
off</div>
<div>124:
option
count-fop-hits
off</div>
<div>125:
subvolumes
cees-data-read-only</div>
<div>126:
end-volume</div>
<div>127: </div>
<div>128:
volume
cees-data-server</div>
<div>129:
type
protocol/server</div>
<div>130:
option
transport.socket.listen-port
49152</div>
<div>131:
option
rpc-auth.auth-glusterfs
on</div>
<div>132:
option
rpc-auth.auth-unix
on</div>
<div>133:
option
rpc-auth.auth-null
on</div>
<div>134:
option
rpc-auth-allow-insecure
on</div>
<div>135:
option
transport.rdma.listen-port
49153</div>
<div>136:
option
transport-type
tcp,rdma</div>
<div>137:
option
auth.login./data/brick.allow
18ddaf4c-ad98-4155-9372-717eae<wbr>718b4c</div>
<div>138:
option
auth.login.18ddaf4c-ad98-4155-<wbr>9372-717eae718b4c.password
9e913e92-7de0-47f9-94ed-d08cbb<wbr>130d23</div>
<div>139:
option
auth.addr./data/brick.allow
*</div>
<div>140:
subvolumes
/data/brick</div>
<div>141:
end-volume</div>
<div>142: </div>
<div>+-----------------------------<wbr>------------------------------<wbr>-------------------+</div>
<div>[2016-09-27
16:16:30.079541] I [login.c:81:gf_auth] 0-auth/login: allowed user
names:
18ddaf4c-ad98-4155-9372-717eae<wbr>718b4c</div>
<div>[2016-09-27
16:16:30.079567] I [MSGID: 115029] [server-handshake.c:690:server<wbr>_setvolume]
0-cees-data-server: accepted client from fs3-12560-2016/09/27-16:16:30:<wbr>47674-cees-data-client-3-0-0
(version:
3.7.14)</div>
<div>[2016-09-27
16:16:30.081487] I [login.c:81:gf_auth] 0-auth/login: allowed user
names:
18ddaf4c-ad98-4155-9372-717eae<wbr>718b4c</div>
<div>[2016-09-27
16:16:30.081505] I [MSGID: 115029] [server-handshake.c:690:server<wbr>_setvolume]
0-cees-data-server: accepted client from fs2-11709-2016/09/27-16:16:30:<wbr>50047-cees-data-client-3-0-0
(version:
3.7.14)</div>
<div>[2016-09-27
16:16:30.111091] I [login.c:81:gf_auth] 0-auth/login: allowed user
names:
18ddaf4c-ad98-4155-9372-717eae<wbr>718b4c</div>
<div>[2016-09-27
16:16:30.111113] I [MSGID: 115029] [server-handshake.c:690:server<wbr>_setvolume]
0-cees-data-server: accepted client from fs2-11701-2016/09/27-16:16:29:<wbr>24060-cees-data-client-3-0-0
(version:
3.7.14)</div>
<div>[2016-09-27
16:16:30.112822] I [login.c:81:gf_auth] 0-auth/login: allowed user
names:
18ddaf4c-ad98-4155-9372-717eae<wbr>718b4c</div>
<div>[2016-09-27
16:16:30.112836] I [MSGID: 115029] [server-handshake.c:690:server<wbr>_setvolume]
0-cees-data-server: accepted client from fs3-12552-2016/09/27-16:16:29:<wbr>23041-cees-data-client-3-0-0
(version:
3.7.14)</div>
<div>[2016-09-27
16:16:31.950978] I [login.c:81:gf_auth] 0-auth/login: allowed user
names:
18ddaf4c-ad98-4155-9372-717eae<wbr>718b4c</div>
<div>[2016-09-27
16:16:31.950998] I [MSGID: 115029] [server-handshake.c:690:server<wbr>_setvolume]
0-cees-data-server: accepted client from fs1-6721-2016/09/27-16:16:26:9<wbr>39991-cees-data-client-3-0-0
(version:
3.7.14)</div>
<div>[2016-09-27
16:16:31.981977] I [login.c:81:gf_auth] 0-auth/login: allowed user
names:
18ddaf4c-ad98-4155-9372-717eae<wbr>718b4c</div>
<div>[2016-09-27
16:16:31.981994] I [MSGID: 115029] [server-handshake.c:690:server<wbr>_setvolume]
0-cees-data-server: accepted client from fs1-6729-2016/09/27-16:16:27:9<wbr>71228-cees-data-client-3-0-0
(version:
3.7.14)</div>
</div>
</div>
</div>
<div
class="gmail_extra"><br>
</div>
</blockquote>
<br>
</div>
</div>
Hmm, this shows the brick has started.<br>
Does gluster volume info on fs4 show
all 4 bricks? (I guess it does based
on your first email.)<br>
Does gluster volume status on fs4 (or
ps aux|grep glusterfsd) show the brick
as running?<br>
Does gluster peer status on all nodes
list the other 3 nodes as
connected?<br>
<br>
If yes, you could try `service
glusterd restart` on fs4 and see if it
brings up the brick. I'm just shooting
in the dark here for possible clues.<br>
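<p>The checks above can be run in one pass on fs4 (a sketch;
service names as used elsewhere in this thread):</p>

```shell
# Are all 4 bricks listed?
gluster volume info
# Is the fs4 brick shown as online?
gluster volume status
# Is the brick process actually running?
ps aux | grep '[g]lusterfsd'
# Are the other 3 nodes connected?
gluster peer status

# If all of the above look fine, restart the management daemon
service glusterd restart
```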
-Ravi<span><br>
<br>
<blockquote
type="cite">
<div
class="gmail_extra">
<div
class="gmail_quote">On
Tue, Sep 27,
2016 at 8:46
AM,
Ravishankar N
<span
dir="ltr"><<a
moz-do-not-send="true" class="moz-txt-link-abbreviated"
href="mailto:ravishankar@redhat.com"><a class="moz-txt-link-abbreviated" href="mailto:ravishankar@redhat.com">ravishankar@redhat.com</a></a>></span>
wrote:<br>
<blockquote
class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div
bgcolor="#FFFFFF"><span>
<div>On
09/27/2016
09:06 PM,
Dennis Michael
wrote:<br>
</div>
<blockquote
type="cite">
<div dir="ltr">Yes,
the brick log
/var/log/glusterfs/bricks/data<wbr>-brick.log is created on fs4, and the
snippets
showing the
errors were
from that log.<br>
<br>
</div>
</blockquote>
</span> Unless
I'm missing
something, the
snippet below
is from
glusterd's log
and not the
brick's as is
evident from
the function
names.<br>
-Ravi<span><br>
<blockquote
type="cite">
<div dir="ltr">
Dennis <br>
</div>
<div
class="gmail_extra"><br>
<div
class="gmail_quote">On
Mon, Sep 26,
2016 at 5:58
PM,
Ravishankar N
<span
dir="ltr"><<a
moz-do-not-send="true" class="moz-txt-link-abbreviated"
href="mailto:ravishankar@redhat.com">ravishankar@redhat.com</a>></span>
wrote:<br>
<blockquote
class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex"><span>On
09/27/2016
05:25 AM,
Dennis Michael
wrote:<br>
<blockquote
class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
[2016-09-26
22:44:39.254921]
E [MSGID:
106005]
[glusterd-utils.c:4771:gluster<wbr>d_brick_start]
0-management:
Unable to
start brick
fs4:/data/brick<br>
[2016-09-26
22:44:39.254949]
E [MSGID:
106074]
[glusterd-brick-ops.c:2372:glu<wbr>sterd_op_add_brick]
0-glusterd:
Unable to add
bricks<br>
</blockquote>
<br>
</span> Is the
brick log
created on
fs4? Does it
contain
warnings/errors?<br>
<br>
-Ravi<br>
<br>
</blockquote>
</div>
<br>
</div>
</blockquote>
<p><br>
</p>
</span></div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<p><br>
</p>
</span></div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
<p><br>
</p>
</div>
</div>
</div>
</blockquote>
</div>
<br>
<br clear="all">
<br>
</div>
</div>
<span><font color="#888888">-- <br>
<div data-smartmail="gmail_signature">
<div dir="ltr">
<div><br>
</div>
--Atin<br>
</div>
</div>
</font></span></div>
</blockquote>
</div>
<br>
<br clear="all">
<br>
</div>
</div>
<span class="HOEnZb"><font color="#888888">-- <br>
<div data-smartmail="gmail_signature">
<div dir="ltr">
<div><br>
</div>
--Atin<br>
</div>
</div>
</font></span></div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</blockquote>
<br>
</body>
</html>