<div dir="ltr">So glusterp3 is in a rejected state:<br><br>[root@glusterp1 /]# gluster peer status<br>Number of Peers: 2<br><br>Hostname: <a href="http://glusterp2.graywitch.co.nz">glusterp2.graywitch.co.nz</a><br>Uuid: 93eebe2c-9564-4bb0-975f-2db49f12058b<br>State: Peer in Cluster (Connected)<br>Other names:<br>glusterp2<br><br>Hostname: <a href="http://glusterp3.graywitch.co.nz">glusterp3.graywitch.co.nz</a><br>Uuid: 5d59b704-e42f-46c6-8c14-cf052c489292<br>State: Peer Rejected (Connected)<br>Other names:<br>glusterp3<br>[root@glusterp1 /]#<br><br>========<br><br>[root@glusterp2 /]# gluster peer status<br>Number of Peers: 2<br><br>Hostname: <a href="http://glusterp1.graywitch.co.nz">glusterp1.graywitch.co.nz</a><br>Uuid: 4ece8509-033e-48d1-809f-2079345caea2<br>State: Peer in Cluster (Connected)<br>Other names:<br>glusterp1<br><br>Hostname: <a href="http://glusterp3.graywitch.co.nz">glusterp3.graywitch.co.nz</a><br>Uuid: 5d59b704-e42f-46c6-8c14-cf052c489292<br>State: Peer Rejected (Connected)<br>Other names:<br>glusterp3<br>[root@glusterp2 /]#<br><br>========<br><br>[root@glusterp3 /]# gluster peer status<br>Number of Peers: 2<br><br>Hostname: <a href="http://glusterp1.graywitch.co.nz">glusterp1.graywitch.co.nz</a><br>Uuid: 4ece8509-033e-48d1-809f-2079345caea2<br>State: Peer Rejected (Connected)<br>Other names:<br>glusterp1<br><br>Hostname: <a href="http://glusterp2.graywitch.co.nz">glusterp2.graywitch.co.nz</a><br>Uuid: 93eebe2c-9564-4bb0-975f-2db49f12058b<br>State: Peer Rejected (Connected)<br>Other names:<br>glusterp2<br><br>==========<br>On glusterp3 the gluster service is dead and will not start:<br><br>[root@glusterp3 /]# systemctl status gluster<br>● gluster.service<br> Loaded: not-found (Reason: No such file or directory)<br> Active: inactive (dead)<br><br>[root@glusterp3 /]# systemctl restart gluster<br>Failed to restart gluster.service: Unit gluster.service failed to load: No such file or directory.<br>[root@glusterp3 /]# systemctl enable gluster<br>Failed to execute
operation: Access denied<br>[root@glusterp3 /]# systemctl enable gluster.service<br>Failed to execute operation: Access denied<br>[root@glusterp3 /]# systemctl start gluster.service<br>Failed to start gluster.service: Unit gluster.service failed to load: No such file or directory.<br><br>==========<br><br>[root@glusterp3 /]# rpm -qa |grep gluster<br>glusterfs-client-xlators-3.8.4-1.el7.x86_64<br>glusterfs-server-3.8.4-1.el7.x86_64<br>nfs-ganesha-gluster-2.3.3-1.el7.x86_64<br>glusterfs-cli-3.8.4-1.el7.x86_64<br>glusterfs-api-3.8.4-1.el7.x86_64<br>glusterfs-fuse-3.8.4-1.el7.x86_64<br>glusterfs-ganesha-3.8.4-1.el7.x86_64<br>glusterfs-3.8.4-1.el7.x86_64<br>centos-release-gluster38-1.0-1.el7.centos.noarch<br>glusterfs-libs-3.8.4-1.el7.x86_64<br>[root@glusterp3 /]#<br><br>?<br></div><div class="gmail_extra"><br><div class="gmail_quote">On 14 October 2016 at 12:31, Thing <span dir="ltr"><<a href="mailto:thing.thing@gmail.com" target="_blank">thing.thing@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hmm, it seems I have something rather inconsistent:<br><br>[root@glusterp1 /]# gluster volume create gv1 replica 3 glusterp1:/brick1/gv1 glusterp2:/brick1/gv1 glusterp3:/brick1/gv1<br>volume create: gv1: failed: Host glusterp3 is not in 'Peer in Cluster' state<span class=""><br>[root@glusterp1 /]# gluster peer probe glusterp3<br>peer probe: success. Host glusterp3 port 24007 already in peer list<br></span>[root@glusterp1 /]# gluster peer probe glusterp2<br>peer probe: success.
Host glusterp2 port 24007 already in peer list<br>[root@glusterp1 /]# gluster volume create gv1 replica 3 glusterp1:/brick1/gv1 glusterp2:/brick1/gv1 glusterp3:/brick1/gv1<br>volume create: gv1: failed: /brick1/gv1 is already part of a volume<br>[root@glusterp1 /]# gluster volume show<br>unrecognized word: show (position 1)<span class=""><br>[root@glusterp1 /]# gluster volume <br></span>add-brick delete info quota reset status <br>barrier geo-replication list rebalance set stop <br>clear-locks heal log remove-brick start sync <br>create help profile replace-brick statedump top <br>[root@glusterp1 /]# gluster volume list<br>volume1<span class=""><br>[root@glusterp1 /]# gluster volume start gv0<br>volume start: gv0: failed: Volume gv0 does not exist<br></span>[root@glusterp1 /]# gluster volume start gv1<br>volume start: gv1: failed: Volume gv1 does not exist<br>[root@glusterp1 /]# gluster volume status<br>Status of volume: volume1<br>Gluster process <wbr> TCP Port RDMA Port Online Pid<br>------------------------------<wbr>------------------------------<wbr>------------------<br>Brick glusterp1.graywitch.co.nz:/<wbr>data1 49152 0 Y 2958 <br>Brick glusterp2.graywitch.co.nz:/<wbr>data1 49152 0 Y 2668 <br>NFS Server on localhost N/A N/A N N/A <br>Self-heal Daemon on localhost N/A N/A Y 1038 <br>NFS Server on <a href="http://glusterp2.graywitch.co.nz" target="_blank">glusterp2.graywitch.co.nz</a> N/A N/A N N/A <br>Self-heal Daemon on <a href="http://glusterp2.graywitch.co" target="_blank">glusterp2.graywitch.co</a>.<br>nz <wbr> N/A N/A Y 676 <br> <br>Task Status of Volume volume1<br>------------------------------<wbr>------------------------------<wbr>------------------<br>There are no active volume tasks<br> <br>[root@glusterp1 /]#<br></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On 14 October 2016 at 12:20, Thing <span dir="ltr"><<a href="mailto:thing.thing@gmail.com" target="_blank">thing.thing@gmail.com</a>></span> 
wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>I deleted a gluster volume gv0 as I wanted to make it thin provisioned.<br><br></div>I have rebuilt "gv0" but I am getting a failure,<br><br>==========<br>[root@glusterp1 /]# df -h<br>Filesystem Size Used Avail Use% Mounted on<br>/dev/mapper/centos-root <wbr> 20G 3.9G 17G 20% /<br>devtmpfs 1.8G 0 1.8G 0% /dev<br>tmpfs 1.8G 12K 1.8G 1% /dev/shm<br>tmpfs 1.8G 8.9M 1.8G 1% /run<br>tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup<br>/dev/mapper/centos-tmp 3.9G 33M 3.9G 1% /tmp<br>/dev/mapper/centos-home <wbr> 50G 41M 50G 1% /home<br>/dev/mapper/centos-data1 120G 33M 120G 1% /data1<br>/dev/sda1 997M 312M 685M 32% /boot<br>/dev/mapper/centos-var <wbr> 20G 401M 20G 2% /var<br>tmpfs 368M 0 368M 0% /run/user/1000<br>/dev/mapper/vol_brick1-brick1 100G 33M 100G 1% /brick1<br>[root@glusterp1 /]# mkdir /brick1/gv0<br>[root@glusterp1 /]# gluster volume create gv0 replica 3 glusterp1:/brick1/gv0 glusterp2:/brick1/gv0 glusterp3:/brick1/gv0<br>volume create: gv0: failed: Host glusterp3 is not in 'Peer in Cluster' state<br>[root@glusterp1 /]# gluster peer probe glusterp3<br>peer probe: success. 
Host glusterp3 port 24007 already in peer list<br>[root@glusterp1 /]# gluster volume create gv0 replica 3 glusterp1:/brick1/gv0 glusterp2:/brick1/gv0 glusterp3:/brick1/gv0<br>volume create: gv0: failed: /brick1/gv0 is already part of a volume<br>[root@glusterp1 /]# gluster volume start gv0<br>volume start: gv0: failed: Volume gv0 does not exist<br>[root@glusterp1 /]# gluster volume create gv0 replica 3 glusterp1:/brick1/gv0 glusterp2:/brick1/gv0 glusterp3:/brick1/gv0 --force<br>unrecognized option --force<br>[root@glusterp1 /]# gluster volume create gv0 replica 3 glusterp1:/brick1/gv0 glusterp2:/brick1/gv0 glusterp3:/brick1/gv0 <br>volume create: gv0: failed: /brick1/gv0 is already part of a volume<br>[root@glusterp1 /]# <br>==========<br><br></div>Obviously something isn't happy here, but I have no idea what.<br><br></div>How do I fix this, please?<br></div>
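The "/brick1/gv0 is already part of a volume" failure above is glusterd refusing to reuse a brick directory that still carries the extended attributes from the deleted gv0 volume. A commonly documented cleanup (a sketch, to be run as root on every node whose brick is being reused; double-check the brick path before deleting anything) is:

```shell
# Strip the leftover gluster metadata from the old brick directory.
# trusted.* extended attributes are only visible/removable as root.
setfattr -x trusted.glusterfs.volume-id /brick1/gv0
setfattr -x trusted.gfid /brick1/gv0
rm -rf /brick1/gv0/.glusterfs

# Or, more simply, recreate the brick directory from scratch:
# rm -rf /brick1/gv0 && mkdir /brick1/gv0
```

Note also that the gluster CLI takes "force" as a trailing keyword rather than a "--force" option, e.g. `gluster volume create gv0 replica 3 ... force`.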
</blockquote></div><br></div>
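Regarding the earlier "unrecognized word: show" message: the gluster CLI has no `show` subcommand; the equivalents are `list` and `info`:

```shell
gluster volume list     # just the names of defined volumes
gluster volume info     # full configuration of every volume
gluster volume status   # runtime state (ports, PIDs, online bricks)
```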
</div></div></blockquote></div><br></div>
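On the "dead" service at the top of the thread: the glusterfs-server package ships its management daemon as `glusterd.service`, not `gluster.service`, which is why systemctl reports "No such file or directory". Once glusterd is running, a lingering "Peer Rejected" state is usually a volume-configuration checksum mismatch; the commonly documented recovery (a sketch assuming stock CentOS 7 paths; it wipes the rejected node's local volume config while keeping its UUID, so use with care) is:

```shell
# On glusterp3: the unit name is glusterd, not gluster.
systemctl status glusterd
systemctl enable glusterd
systemctl start glusterd

# If the peer still shows "Peer Rejected", clear its local config,
# keeping only glusterd.info (which stores this node's UUID):
systemctl stop glusterd
cd /var/lib/glusterd
find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
systemctl start glusterd

# Then, from a healthy node such as glusterp1, re-probe and verify:
gluster peer probe glusterp3
gluster peer status
```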