<div dir="ltr">So the origin of all your problems is basically because of glusterp3 node in rejected state. You should be able to see an error log in glusterp1 & glusterp2 about why this peer has been rejected during handshaking. If you can point me to that log entry, probably that should give us a clue what has gone wrong and based on that I can help you with a workaround.<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Oct 14, 2016 at 9:32 AM, Lindsay Mathieson <span dir="ltr"><<a href="mailto:lindsay.mathieson@gmail.com" target="_blank">lindsay.mathieson@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Maybe remove peer glusterp3 via "gluster peer detach" then re add it?<br>
<div><div class="h5"><br>
On 14 October 2016 at 12:16, Thing <thing.thing@gmail.com> wrote:
> I seem to have a broken volume on glusterp3 which I don't seem to be able to
> fix. How do I fix it, please?
>
> ========
> [root@glusterp1 /]# ls -l /data1
> total 4
> -rw-r--r--. 2 root root 0 Dec 14  2015 file1
> -rw-r--r--. 2 root root 0 Dec 14  2015 file2
> -rw-r--r--. 2 root root 0 Dec 14  2015 file3
> -rw-r--r--. 2 root root 0 Dec 14  2015 file.ipa1
> [root@glusterp1 /]# gluster volume status
> Staging failed on glusterp3.graywitch.co.nz. Error: Volume volume1 does not
> exist
>
> [root@glusterp1 /]# gluster
> gluster> volume info
>
> Volume Name: volume1
> Type: Replicate
> Volume ID: 91eef74e-4016-4bbe-8e86-01c88c64593f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: glusterp1.graywitch.co.nz:/data1
> Brick2: glusterp2.graywitch.co.nz:/data1
> Brick3: glusterp3.graywitch.co.nz:/data1
> Options Reconfigured:
> performance.readdir-ahead: on
> gluster> exit
> [root@glusterp1 /]# gluster volume heal volume1 info
> Brick glusterp1.graywitch.co.nz:/data1
> Status: Connected
> Number of entries: 0
>
> Brick glusterp2.graywitch.co.nz:/data1
> Status: Connected
> Number of entries: 0
>
> Brick glusterp3.graywitch.co.nz:/data1
> Status: Connected
> Number of entries: 0
>
> [root@glusterp1 /]# gluster volume info
>
> Volume Name: volume1
> Type: Replicate
> Volume ID: 91eef74e-4016-4bbe-8e86-01c88c64593f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: glusterp1.graywitch.co.nz:/data1
> Brick2: glusterp2.graywitch.co.nz:/data1
> Brick3: glusterp3.graywitch.co.nz:/data1
> Options Reconfigured:
> performance.readdir-ahead: on
> [root@glusterp1 /]# gluster volume heal volume1 full
> Launching heal operation to perform full self heal on volume volume1 has
> been unsuccessful on bricks that are down. Please check if all brick
> processes are running.
> [root@glusterp1 /]#
> =============
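
The "Staging failed ... Volume volume1 does not exist" error above suggests that glusterd on glusterp3 has lost the volume1 definition that normally lives under /var/lib/glusterd/vols. Once glusterd is actually running on glusterp3 and the peer state is repaired, one way to pull that definition back from a healthy node is the volume sync command; a sketch, to be run on glusterp3:

    # syntax: gluster volume sync <HOSTNAME> [all|<VOLNAME>]
    gluster volume sync glusterp1.graywitch.co.nz volume1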
>
> On 14 October 2016 at 12:40, Thing <thing.thing@gmail.com> wrote:
>>
>> So glusterp3 is in a rejected state:
>>
>> [root@glusterp1 /]# gluster peer status
>> Number of Peers: 2
>>
>> Hostname: glusterp2.graywitch.co.nz
>> Uuid: 93eebe2c-9564-4bb0-975f-2db49f12058b
>> State: Peer in Cluster (Connected)
>> Other names:
>> glusterp2
>>
>> Hostname: glusterp3.graywitch.co.nz
>> Uuid: 5d59b704-e42f-46c6-8c14-cf052c489292
>> State: Peer Rejected (Connected)
>> Other names:
>> glusterp3
>> [root@glusterp1 /]#
>>
>> ========
>>
>> [root@glusterp2 /]# gluster peer status
>> Number of Peers: 2
>>
>> Hostname: glusterp1.graywitch.co.nz
>> Uuid: 4ece8509-033e-48d1-809f-2079345caea2
>> State: Peer in Cluster (Connected)
>> Other names:
>> glusterp1
>>
>> Hostname: glusterp3.graywitch.co.nz
>> Uuid: 5d59b704-e42f-46c6-8c14-cf052c489292
>> State: Peer Rejected (Connected)
>> Other names:
>> glusterp3
>> [root@glusterp2 /]#
>>
>> ========
>>
>> [root@glusterp3 /]# gluster peer status
>> Number of Peers: 2
>>
>> Hostname: glusterp1.graywitch.co.nz
>> Uuid: 4ece8509-033e-48d1-809f-2079345caea2
>> State: Peer Rejected (Connected)
>> Other names:
>> glusterp1
>>
>> Hostname: glusterp2.graywitch.co.nz
>> Uuid: 93eebe2c-9564-4bb0-975f-2db49f12058b
>> State: Peer Rejected (Connected)
>> Other names:
>> glusterp2
>>
>> ==========
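
Since glusterp3 rejects both peers while glusterp1 and glusterp2 agree with each other, glusterp3's local state is the odd one out. The usual workaround from the Gluster docs for a rejected peer looks roughly like this (run on glusterp3; it wipes glusterp3's local cluster metadata, so rely on it only while glusterp1/glusterp2 hold good copies):

    systemctl stop glusterd
    # keep glusterd.info (this node's UUID); clear the rest of the local state
    find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
    systemctl start glusterd
    gluster peer probe glusterp1.graywitch.co.nz
    systemctl restart glusterd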
>> On glusterp3, gluster is dead and will not start:
>>
>> [root@glusterp3 /]# systemctl status gluster
>> ● gluster.service
>>    Loaded: not-found (Reason: No such file or directory)
>>    Active: inactive (dead)
>>
>> [root@glusterp3 /]# systemctl restart gluster
>> Failed to restart gluster.service: Unit gluster.service failed to load: No
>> such file or directory.
>> [root@glusterp3 /]# systemctl enable gluster
>> Failed to execute operation: Access denied
>> [root@glusterp3 /]# systemctl enable gluster.service
>> Failed to execute operation: Access denied
>> [root@glusterp3 /]# systemctl start gluster.service
>> Failed to start gluster.service: Unit gluster.service failed to load: No
>> such file or directory.
>>
>> ==========
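
Note that the systemd unit shipped by glusterfs-server is glusterd.service, not gluster.service, which explains the "No such file or directory" failures above. The intended commands would be roughly:

    systemctl status glusterd
    systemctl enable glusterd
    systemctl start glusterd

(The "Access denied" responses as root are a separate systemd quirk; "systemctl daemon-reexec" sometimes clears it.)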
>>
>> [root@glusterp3 /]# rpm -qa |grep gluster
>> glusterfs-client-xlators-3.8.4-1.el7.x86_64
>> glusterfs-server-3.8.4-1.el7.x86_64
>> nfs-ganesha-gluster-2.3.3-1.el7.x86_64
>> glusterfs-cli-3.8.4-1.el7.x86_64
>> glusterfs-api-3.8.4-1.el7.x86_64
>> glusterfs-fuse-3.8.4-1.el7.x86_64
>> glusterfs-ganesha-3.8.4-1.el7.x86_64
>> glusterfs-3.8.4-1.el7.x86_64
>> centos-release-gluster38-1.0-1.el7.centos.noarch
>> glusterfs-libs-3.8.4-1.el7.x86_64
>> [root@glusterp3 /]#
>>
>> ?
>>
>> On 14 October 2016 at 12:31, Thing <thing.thing@gmail.com> wrote:
>>>
>>> Hmm, it seems I have something rather inconsistent:
>>>
>>> [root@glusterp1 /]# gluster volume create gv1 replica 3
>>> glusterp1:/brick1/gv1 glusterp2:/brick1/gv1 glusterp3:/brick1/gv1
>>> volume create: gv1: failed: Host glusterp3 is not in 'Peer in Cluster'
>>> state
>>> [root@glusterp1 /]# gluster peer probe glusterp3
>>> peer probe: success. Host glusterp3 port 24007 already in peer list
>>> [root@glusterp1 /]# gluster peer probe glusterp2
>>> peer probe: success. Host glusterp2 port 24007 already in peer list
>>> [root@glusterp1 /]# gluster volume create gv1 replica 3
>>> glusterp1:/brick1/gv1 glusterp2:/brick1/gv1 glusterp3:/brick1/gv1
>>> volume create: gv1: failed: /brick1/gv1 is already part of a volume
>>> [root@glusterp1 /]# gluster volume show
>>> unrecognized word: show (position 1)
>>> [root@glusterp1 /]# gluster volume
>>> add-brick    delete           info     quota          reset      status
>>> barrier      geo-replication  list     rebalance      set        stop
>>> clear-locks  heal             log      remove-brick   start      sync
>>> create       help             profile  replace-brick  statedump  top
>>> [root@glusterp1 /]# gluster volume list
>>> volume1
>>> [root@glusterp1 /]# gluster volume start gv0
>>> volume start: gv0: failed: Volume gv0 does not exist
>>> [root@glusterp1 /]# gluster volume start gv1
>>> volume start: gv1: failed: Volume gv1 does not exist
>>> [root@glusterp1 /]# gluster volume status
>>> Status of volume: volume1
>>> Gluster process                                TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick glusterp1.graywitch.co.nz:/data1         49152     0          Y       2958
>>> Brick glusterp2.graywitch.co.nz:/data1         49152     0          Y       2668
>>> NFS Server on localhost                        N/A       N/A        N       N/A
>>> Self-heal Daemon on localhost                  N/A       N/A        Y       1038
>>> NFS Server on glusterp2.graywitch.co.nz        N/A       N/A        N       N/A
>>> Self-heal Daemon on glusterp2.graywitch.co.nz  N/A       N/A        Y       676
>>>
>>> Task Status of Volume volume1
>>> ------------------------------------------------------------------------------
>>> There are no active volume tasks
>>>
>>> [root@glusterp1 /]#
>>>
>>> On 14 October 2016 at 12:20, Thing <thing.thing@gmail.com> wrote:
>>>>
>>>> I deleted the gluster volume gv0 because I wanted to make it thin-provisioned.
>>>>
>>>> I have rebuilt "gv0", but I am getting a failure:
>>>>
>>>> ==========
>>>> [root@glusterp1 /]# df -h
>>>> Filesystem                     Size  Used  Avail  Use%  Mounted on
>>>> /dev/mapper/centos-root         20G  3.9G    17G   20%  /
>>>> devtmpfs                       1.8G     0   1.8G    0%  /dev
>>>> tmpfs                          1.8G   12K   1.8G    1%  /dev/shm
>>>> tmpfs                          1.8G  8.9M   1.8G    1%  /run
>>>> tmpfs                          1.8G     0   1.8G    0%  /sys/fs/cgroup
>>>> /dev/mapper/centos-tmp         3.9G   33M   3.9G    1%  /tmp
>>>> /dev/mapper/centos-home         50G   41M    50G    1%  /home
>>>> /dev/mapper/centos-data1       120G   33M   120G    1%  /data1
>>>> /dev/sda1                      997M  312M   685M   32%  /boot
>>>> /dev/mapper/centos-var          20G  401M    20G    2%  /var
>>>> tmpfs                          368M     0   368M    0%  /run/user/1000
>>>> /dev/mapper/vol_brick1-brick1  100G   33M   100G    1%  /brick1
>>>> [root@glusterp1 /]# mkdir /brick1/gv0
>>>> [root@glusterp1 /]# gluster volume create gv0 replica 3
>>>> glusterp1:/brick1/gv0 glusterp2:/brick1/gv0 glusterp3:/brick1/gv0
>>>> volume create: gv0: failed: Host glusterp3 is not in 'Peer in Cluster'
>>>> state
>>>> [root@glusterp1 /]# gluster peer probe glusterp3
>>>> peer probe: success. Host glusterp3 port 24007 already in peer list
>>>> [root@glusterp1 /]# gluster volume create gv0 replica 3
>>>> glusterp1:/brick1/gv0 glusterp2:/brick1/gv0 glusterp3:/brick1/gv0
>>>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>>>> [root@glusterp1 /]# gluster volume start gv0
>>>> volume start: gv0: failed: Volume gv0 does not exist
>>>> [root@glusterp1 /]# gluster volume create gv0 replica 3
>>>> glusterp1:/brick1/gv0 glusterp2:/brick1/gv0 glusterp3:/brick1/gv0 --force
>>>> unrecognized option --force
>>>> [root@glusterp1 /]# gluster volume create gv0 replica 3
>>>> glusterp1:/brick1/gv0 glusterp2:/brick1/gv0 glusterp3:/brick1/gv0
>>>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>>>> [root@glusterp1 /]#
>>>> ==========
>>>>
>>>> Obviously something isn't happy here, but I have no idea what.
>>>>
>>>> How do I fix this, please?
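
On the recurring "/brick1/gv0 is already part of a volume" failure: gluster stamps each brick directory with a trusted.glusterfs.volume-id extended attribute, and deleting a volume leaves that xattr (and the .glusterfs directory) behind. A sketch of the usual cleanup, assuming the data under /brick1/gv0 is disposable, run on every node before re-creating gv0:

    setfattr -x trusted.glusterfs.volume-id /brick1/gv0
    setfattr -x trusted.gfid /brick1/gv0
    rm -rf /brick1/gv0/.glusterfs

Also, the CLI expects the bare word "force" at the end of "gluster volume create ...", not "--force", hence the "unrecognized option" error.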
>>>
>>
>
--
Lindsay

--
--Atin