<div dir="ltr">Thanks for that advice. It worked. Setting the UUID in <a href="http://glusterd.info">glusterd.info</a> was the bit I missed.<div><br></div><div>It seemed to work without the setfattr step in my particular case.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Sep 22, 2016 at 11:05 AM, Serkan Çoban <span dir="ltr"><<a href="mailto:cobanserkan@gmail.com" target="_blank">cobanserkan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Here are the steps for replacing a failed node:<br>
<br>
<br>
1- On one of the other servers, run "grep thalia<br>
/var/lib/glusterd/peers/* | cut -d: -f1 | cut -d/ -f6" and note the<br>
UUID<br>
2- Stop glusterd on the failed server and add "UUID=uuid_from_previous<br>
step" to /var/lib/glusterd/<a href="http://glusterd.info" rel="noreferrer" target="_blank">glusterd.info</a> and start glusterd<br>
3- Run "gluster peer probe calliope"<br>
4- Restart glusterd<br>
5- Now "gluster peer status" should show all the peers; if not, probe<br>
them manually as above.<br>
6- For all the bricks run the command "setfattr -n<br>
trusted.glusterfs.volume-id -v 0x$(grep volume-id<br>
/var/lib/glusterd/vols/vol_name/info | cut -d= -f2 | sed 's/-//g')<br>
brick_name"<br>
7- Restart glusterd and everything should be fine.<br>
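The two shell pipelines above can be sketched and sanity-checked locally against the IDs quoted later in this thread. This is a minimal sketch: the temp files below only mimic the /var/lib/glusterd layout, and basename stands in for the path-depth-dependent "cut -d/ -f6" of step 1.<br>
<br>

```shell
#!/bin/sh
# Sketch of steps 1 and 6, using the UUIDs quoted in this thread.
# The temp files mimic the /var/lib/glusterd layout; paths are illustrative.

# --- Step 1: find the failed peer's UUID from another server's peer files.
# Files under peers/ are named after each peer's UUID and contain its hostname.
peers=$(mktemp -d)
echo 'hostname1=thalia' > "$peers/843169fa-3937-42de-8fda-9819efc75fe8"
echo 'hostname1=euterpe' > "$peers/9fafa5c4-1541-4aa0-9ea2-923a756cadbb"
# The original uses `cut -d/ -f6` because the real path is
# /var/lib/glusterd/peers/<uuid>; basename works at any path depth.
uuid=$(basename "$(grep -l thalia "$peers"/*)")
echo "peer uuid: $uuid"

# --- Step 6: turn the volume-id from the vol info file into the hex
# value that setfattr expects (0x prefix, dashes stripped).
info=$(mktemp)
echo 'volume-id=e8f15248-d9de-458e-9896-f1a5782dcf74' > "$info"
volid="0x$(grep volume-id "$info" | cut -d= -f2 | sed 's/-//g')"
echo "volume-id: $volid"
# On each brick you would then run (not executed here):
#   setfattr -n trusted.glusterfs.volume-id -v "$volid" /brick/p1

rm -rf "$peers" "$info"
```

<br>
For the volume in this thread, the computed value would be 0xe8f15248d9de458e9896f1a5782dcf74.<br>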
<br>
I think I read the steps from this link:<br>
<a href="https://support.rackspace.com/how-to/recover-from-a-failed-server-in-a-glusterfs-array/" rel="noreferrer" target="_blank">https://support.rackspace.com/<wbr>how-to/recover-from-a-failed-<wbr>server-in-a-glusterfs-array/</a><br>
Look at the "keep the IP address" part.<br>
<div><div class="h5"><br>
<br>
On Thu, Sep 22, 2016 at 5:16 PM, Tony Schreiner<br>
<<a href="mailto:anthony.schreiner@bc.edu">anthony.schreiner@bc.edu</a>> wrote:<br>
> I set up a dispersed volume with 1 x (3 + 1) nodes (I do know that 3+1 is<br>
> not optimal).<br>
> Originally created in version 3.7 but recently upgraded without issue to<br>
> 3.8.<br>
><br>
> # gluster vol info<br>
> Volume Name: rvol<br>
> Type: Disperse<br>
> Volume ID: e8f15248-d9de-458e-9896-f1a5782dcf74<br>
> Status: Started<br>
> Snapshot Count: 0<br>
> Number of Bricks: 1 x (3 + 1) = 4<br>
> Transport-type: tcp<br>
> Bricks:<br>
> Brick1: calliope:/brick/p1<br>
> Brick2: euterpe:/brick/p1<br>
> Brick3: lemans:/brick/p1<br>
> Brick4: thalia:/brick/p1<br>
> Options Reconfigured:<br>
> performance.readdir-ahead: on<br>
> nfs.disable: off<br>
><br>
> I inadvertently allowed one of the nodes (thalia) to be reinstalled, which<br>
> overwrote the system, but not the brick, and I need guidance in getting it<br>
> back into the volume.<br>
><br>
> (on lemans)<br>
> gluster peer status<br>
> Number of Peers: 3<br>
><br>
> Hostname: calliope<br>
> Uuid: 72373eb1-8047-405a-a094-891e559755da<br>
> State: Peer in Cluster (Connected)<br>
><br>
> Hostname: euterpe<br>
> Uuid: 9fafa5c4-1541-4aa0-9ea2-923a756cadbb<br>
> State: Peer in Cluster (Connected)<br>
><br>
> Hostname: thalia<br>
> Uuid: 843169fa-3937-42de-8fda-<wbr>9819efc75fe8<br>
> State: Peer Rejected (Connected)<br>
><br>
> The thalia peer is rejected. If I try to peer probe thalia, I am told it is<br>
> already part of the pool. If, from thalia, I try to peer probe one of the<br>
> others, I am told that they are already part of another pool.<br>
><br>
> I have tried removing the thalia brick with<br>
> gluster vol remove-brick rvol thalia:/brick/p1 start<br>
> but get the error<br>
> volume remove-brick start: failed: Remove brick incorrect brick count of 1<br>
> for disperse 4<br>
><br>
> I am not finding much guidance for this particular situation. I could use a<br>
> suggestion on how to recover. It's a lab situation so no biggie if I lose<br>
> it.<br>
> Cheers<br>
><br>
> Tony Schreiner<br>
><br>
</div></div>> ______________________________<wbr>_________________<br>
> Gluster-users mailing list<br>
> <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br>
</blockquote></div><br></div>