<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<br>
<br>
<div class="moz-cite-prefix">On 02/23/2016 04:34 PM, Lindsay
Mathieson wrote:<br>
</div>
<blockquote cite="mid:56CC3CC2.5080906@gmail.com" type="cite">On
23/02/2016 8:29 PM, Sahina Bose wrote:
<br>
<blockquote type="cite">Late jumping into this thread, but curious
-
<br>
<br>
Is there a specific reason that you are removing and adding a
brick? Will replace-brick not work for you?
<br>
</blockquote>
<br>
<br>
Testing procedures for replacing a failed brick (disk crash, etc.).
<br>
<br>
</blockquote>
<br>
The recommended way to replace a brick in a replica volume is:<br>
gluster volume replace-brick &lt;volname&gt; &lt;src brick path&gt;
&lt;destination brick path&gt; commit force<br>
We found that the heal-related issues you encountered when decreasing and
then increasing the replica count do not occur with this approach.<br>
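<br>
For example, assuming a hypothetical volume named "testvol" and placeholder
brick paths, the command would look like this:<br>
<pre>
# Replace the failed brick; "commit force" makes the change take effect
# immediately and triggers self-heal onto the new brick.
# The volume name and brick paths below are placeholders for illustration.
gluster volume replace-brick testvol host3:/bricks/brick1 host3:/bricks/brick1_new commit force

# Monitor heal progress afterwards:
gluster volume heal testvol info
</pre>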
<br>
In case the entire host needs to be replaced (for instance, re-installing
the host or reformatting the disks, and assuming the brick directories are
the same as before), here is a flow that works; a command sketch follows
the steps. Can you check if this will solve your use case?<br>
<br>
(Follow steps 1-4 only if host3 has been re-installed and
/var/lib/glusterd re-initialized)<br>
<ol>
<li>Stop glusterd on the host being replaced (say, host3)</li>
<li>Run "gluster peer status" from a working node to obtain the
previous UUID of host3</li>
<li>Edit the UUID in /var/lib/glusterd/glusterd.info on host3, setting it
to the previous UUID obtained in step 2</li>
<li>Copy the peer files from a working peer's /var/lib/glusterd/peers
directory to host3 (excluding the file for the node being replaced,
here host3)</li>
<li>Create and remove a tmp dir at the volume mount points</li>
<li>Restart glusterd -- heal will start, and the brick on the replaced
node should be synced automatically</li>
</ol>
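<p>Putting the steps together, here is a minimal shell sketch. The
hostnames (host1 as a working peer, host3 as the replaced node), the
volume name "testvol", and the mount path /mnt/testvol are placeholders
for illustration:</p>
<pre>
# Step 1: on host3, stop glusterd
systemctl stop glusterd

# Step 2: on a working node (e.g. host1), note host3's previous UUID
gluster peer status

# Step 3: on host3, set the recorded UUID in
# /var/lib/glusterd/glusterd.info (edit the UUID= line)

# Step 4: on host3, copy peer files from a working peer, then remove
# the file named with host3's own UUID
scp host1:/var/lib/glusterd/peers/* /var/lib/glusterd/peers/
rm -f /var/lib/glusterd/peers/&lt;host3-uuid&gt;

# Step 5: from a client mount, create and remove a tmp dir
mkdir /mnt/testvol/tmpdir &amp;&amp; rmdir /mnt/testvol/tmpdir

# Step 6: on host3, restart glusterd, then watch heal progress
systemctl start glusterd
gluster volume heal testvol info
</pre>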
</body>
</html>