On 13 July 2015 at 19:19, Atin Mukherjee <amukherj@redhat.com> wrote:

On 07/13/2015 10:45 PM, Tiemen Ruiten wrote:
> On 13 July 2015 at 19:06, Atin Mukherjee <amukherj@redhat.com> wrote:
>
>>
>>
>> On 07/13/2015 10:29 PM, Tiemen Ruiten wrote:
>>> OK, I found what's wrong. From the brick's log:
>>>
>>> [2015-07-12 02:32:01.542934] I [glusterfsd-mgmt.c:1512:mgmt_getspec_cbk]
>>> 0-glusterfs: No change in volfile, continuing
>>> [2015-07-13 14:21:06.722675] W [glusterfsd.c:1219:cleanup_and_exit] (-->
>>> 0-: received signum (15), shutting down
>>> [2015-07-13 14:21:35.168750] I [MSGID: 100030] [glusterfsd.c:2294:main]
>>> 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.7.1
>>> (args: /usr/sbin/glusterfsd -s 10.100.3.10 --volfile-id
>>> vmimage.10.100.3.10.export-gluster01-brick -p
>>> /var/lib/glusterd/vols/vmimage/run/10.100.3.10-export-gluster01-brick.pid
>>> -S /var/run/gluster/2bfe3a2242d586d0850775f601f1c3ee.socket --brick-name
>>> /export/gluster01/brick -l
>>> /var/log/glusterfs/bricks/export-gluster01-brick.log --xlator-option
>>> *-posix.glusterd-uuid=26186ec6-a8c7-4834-bcaa-24e30289dba3 --brick-port
>>> 49153 --xlator-option vmimage-server.listen-port=49153)
>>> [2015-07-13 14:21:35.178558] E [socket.c:823:__socket_server_bind]
>>> 0-socket.glusterfsd: binding to failed: Address already in use
>>> [2015-07-13 14:21:35.178624] E [socket.c:826:__socket_server_bind]
>>> 0-socket.glusterfsd: Port is already in use
>>> [2015-07-13 14:21:35.178649] W [rpcsvc.c:1602:rpcsvc_transport_create]
>>> 0-rpc-service: listening on transport failed
>>>
>>>
>>> ps aux | grep gluster
>>> root 6417 0.0 0.2 753080 175016 ? Ssl May21 25:25
>>> /usr/sbin/glusterfs --volfile-server=10.100.3.10 --volfile-id=/wwwdata
>>> /mnt/gluster/web/wwwdata
>>> root 6742 0.0 0.0 622012 17624 ? Ssl May21 22:31
>>> /usr/sbin/glusterfs --volfile-server=10.100.3.10 --volfile-id=/conf
>>> /mnt/gluster/conf
>>> root 36575 0.2 0.0 589956 19228 ? Ssl 16:21 0:19
>>> /usr/sbin/glusterd --pid-file=/run/glusterd.pid
>>> root 36720 0.0 0.0 565140 55836 ? Ssl 16:21 0:02
>>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
>>> /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
>>> /var/run/gluster/8b9ce8bebfa8c1d2fabb62654bdc550e.socket
>>> root 36730 0.0 0.0 451016 22936 ? Ssl 16:21 0:01
>>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
>>> /var/lib/glusterd/glustershd/run/glustershd.pid -l
>>> /var/log/glusterfs/glustershd.log -S
>>> /var/run/gluster/c0d7454986c96eef463d028dc8bce9fe.socket --xlator-option
>>> *replicate*.node-uuid=26186ec6-a8c7-4834-bcaa-24e30289dba3
>>> root 37398 0.0 0.0 103248 916 pts/2 S+ 18:49 0:00 grep gluster
>>> root 40058 0.0 0.0 755216 60212 ? Ssl May21 22:06
>>> /usr/sbin/glusterfs --volfile-server=10.100.3.10 --volfile-id=/fl-webroot
>>> /mnt/gluster/web/flash/webroot
>>>
>>> So several leftover processes. What will happen if I do a
>>>
>>> /etc/init.d/glusterd stop
>>> /etc/init.d/glusterfsd stop
>>>
>>> kill all remaining gluster processes and restart gluster on this node?
>>>
>>> Will the volume stay online? What about split-brain? I suppose it would be
>>> best to disconnect all clients first...?
>> Can you double-check if any brick process is already running? If so, kill
>> it and try 'gluster volume start <volname> force'.
>>>
>>>
>>> On 13 July 2015 at 18:25, Tiemen Ruiten <t.ruiten@rdmedia.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> We have a two-node gluster cluster, running version 3.7.1, that hosts an
>>>> oVirt storage domain. This afternoon I tried creating a template in oVirt,
>>>> but within a minute VMs stopped responding and Gluster started generating
>>>> errors like the following:
>>>>
>>>> [2015-07-13 14:09:51.772629] W [rpcsvc.c:270:rpcsvc_program_actor]
>>>> 0-rpc-service: RPC program not available (req 1298437 330) for
>>>> 10.100.3.40:1021
>>>> [2015-07-13 14:09:51.772675] E [rpcsvc.c:565:rpcsvc_check_and_reply_error]
>>>> 0-rpcsvc: rpc actor failed to complete successfully
>>>>
>>>> I managed to get things in working order again by restarting glusterd and
>>>> glusterfsd, but now one brick is down:
>>>>
>>>> $ sudo gluster volume status vmimage
>>>> Status of volume: vmimage
>>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>>> ------------------------------------------------------------------------------
>>>> Brick 10.100.3.10:/export/gluster01/brick   N/A       N/A        N       36736
>>>> Brick 10.100.3.11:/export/gluster01/brick   49153     0          Y       11897
>>>> NFS Server on localhost                     2049      0          Y       36720
>>>> Self-heal Daemon on localhost               N/A       N/A        Y       36730
>>>> NFS Server on 10.100.3.11                   2049      0          Y       11919
>>>> Self-heal Daemon on 10.100.3.11             N/A       N/A        Y       11924
>>>>
>>>> Task Status of Volume vmimage
>>>> ------------------------------------------------------------------------------
>>>> There are no active volume tasks
>>>>
>>>> $ sudo gluster peer status
>>>> Number of Peers: 1
>>>>
>>>> Hostname: 10.100.3.11
>>>> Uuid: f9872fea-47f5-41f6-8094-c9fabd3c1339
>>>> State: Peer in Cluster (Connected)
>>>>
>>>> Additionally, in the etc-glusterfs-glusterd.vol.log I see these messages
>>>> repeating every 3 seconds:
>>>>
>>>> [2015-07-13 16:15:21.737044] W [socket.c:642:__socket_rwv] 0-management:
>>>> readv on /var/run/gluster/2bfe3a2242d586d0850775f601f1c3ee.socket failed
>>>> (Invalid argument)
>>>> The message "I [MSGID: 106005]
>>>> [glusterd-handler.c:4667:__glusterd_brick_rpc_notify] 0-management: Brick
>>>> 10.100.3.10:/export/gluster01/brick has disconnected from glusterd."
>>>> repeated 39 times between [2015-07-13 16:13:24.717611] and [2015-07-13
>>>> 16:15:21.737862]
>>>> [2015-07-13 16:15:24.737694] W [socket.c:642:__socket_rwv] 0-management:
>>>> readv on /var/run/gluster/2bfe3a2242d586d0850775f601f1c3ee.socket failed
>>>> (Invalid argument)
>>>> [2015-07-13 16:15:24.738498] I [MSGID: 106005]
>>>> [glusterd-handler.c:4667:__glusterd_brick_rpc_notify] 0-management: Brick
>>>> 10.100.3.10:/export/gluster01/brick has disconnected from glusterd.
>>>> [2015-07-13 16:15:27.738194] W [socket.c:642:__socket_rwv] 0-management:
>>>> readv on /var/run/gluster/2bfe3a2242d586d0850775f601f1c3ee.socket failed
>>>> (Invalid argument)
>>>> [2015-07-13 16:15:30.738991] W [socket.c:642:__socket_rwv] 0-management:
>>>> readv on /var/run/gluster/2bfe3a2242d586d0850775f601f1c3ee.socket failed
>>>> (Invalid argument)
>>>> [2015-07-13 16:15:33.739735] W [socket.c:642:__socket_rwv] 0-management:
>>>> readv on /var/run/gluster/2bfe3a2242d586d0850775f601f1c3ee.socket failed
>>>> (Invalid argument)
>>>>
>>>> Can I get this brick back up without bringing the volume/cluster down?
>>>>
>>>> --
>>>> Tiemen Ruiten
>>>> Systems Engineer
>>>> R&D Media
>>>>
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>> --
>> ~Atin
>>
>
>
> Hi Atin,
>
> I see brick processes for the volumes wwwdata, conf and fl-webroot, judging
> from the ps aux | grep gluster output. These volumes are not started. There
> is no brick process for vmimage. So you're saying: kill those brick
> processes, then gluster volume start vmimage force?
No, I meant check whether any leftover brick process is still there for
vmimage. If there is, kill it and start the volume with force, or you could
try to stop the volume and then start it again.
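
Roughly something like this (the PID is only a placeholder for whatever ps
actually shows for a stale vmimage brick process):

ps aux | grep 'glusterfsd.*vmimage'   # look for a leftover brick process
kill <stale-glusterfsd-pid>           # only if such a process shows up
gluster volume start vmimage force    # respawn the missing brick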
~Atin
<div class=""><div class="h5">><br>
> Thank you for your response.<br>
><br>
<br>
--
~Atin
OK, there's no brick process for vmimage. Is it possible that one of the
leftover brick processes for the other volumes is blocking the port?

What is the best approach in my case? Disconnect all clients, stop the volume
and restart it?
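
Before touching anything I could probably check what is actually bound to the
brick port; a rough sketch, assuming the brick port is still 49153 as in the
log above:

sudo ss -tlnp | grep 49153    # or: sudo netstat -tlnp | grep 49153
# if a leftover gluster process is holding the port, kill it, then:
sudo gluster volume start vmimage force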
--
Tiemen Ruiten
Systems Engineer
R&D Media