<div dir="ltr">ok, thanks for your help</div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, May 2, 2016 at 8:21 PM, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
<br>
On 05/02/2016 01:30 PM, 袁仲 wrote:<br>
> I am sorry for the misunderstanding.<br>
> Actually I can stop the volume and even delete it. What I really wanted to<br>
> express is that the volume must not be stopped or deleted, since some<br>
> virtual machines are running on it.<br>
> In the case above, P1 crashed and I had to reinstall the system on<br>
> P1, so P1 lost all the information about the volume and the other peers<br>
> mentioned above. When P1 comes back, I want to probe it into the cluster<br>
> P2/P3 belong to, and recover bricks b1 and b2. So, what should I do?<br>
</span>Refer to<br>
<a href="https://www.gluster.org/pipermail/gluster-users.old/2016-March/025917.html" rel="noreferrer" target="_blank">https://www.gluster.org/pipermail/gluster-users.old/2016-March/025917.html</a><br>
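The procedure in the linked thread boils down to restoring the reinstalled peer's old UUID and letting glusterd re-sync the configuration. A hedged sketch, run on the reinstalled P1; the hostnames, paths, and the UUID value below are illustrative (the real UUID must be recovered from a surviving peer's /var/lib/glusterd/peers/ directory, where each filename is a peer UUID):

```shell
# On a surviving peer (e.g. host-P2), find P1's old UUID:
#   grep -r host-P1 /var/lib/glusterd/peers/

# On the reinstalled P1, restore that UUID before glusterd first starts.
# The UUID below is a placeholder, not a real value.
echo "UUID=1d2f3a4b-0000-0000-0000-000000000000" > /var/lib/glusterd/glusterd.info
systemctl restart glusterd

# Re-probe an existing cluster member so P1 rejoins, then sync volume info.
gluster peer probe host-P2
gluster volume sync host-P2 all
```

Once the volume configuration is synced, self-heal can bring the data on b1 and b2 back up to date from their replica partners.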
<span class="">><br>
> On Sat, Apr 30, 2016 at 11:04 PM, Atin Mukherjee<br>
</span><span class="">> <<a href="mailto:atin.mukherjee83@gmail.com">atin.mukherjee83@gmail.com</a> <mailto:<a href="mailto:atin.mukherjee83@gmail.com">atin.mukherjee83@gmail.com</a>>> wrote:<br>
><br>
> -Atin<br>
> Sent from one plus one<br>
> On 30-Apr-2016 8:20 PM, "袁仲" <<a href="mailto:yzlyourself@gmail.com">yzlyourself@gmail.com</a><br>
</span><div><div class="h5">> <mailto:<a href="mailto:yzlyourself@gmail.com">yzlyourself@gmail.com</a>>> wrote:<br>
> ><br>
> > I have a scenario like this:<br>
> ><br>
> ><br>
> > I have 3 peers, e.g. P1, P2 and P3, and each of them has 2 bricks,<br>
> ><br>
> > e.g. P1 has 2 bricks, b1 and b2.<br>
> ><br>
> > P2 has 2 bricks, b3 and b4.<br>
> ><br>
> > P3 has 2 bricks, b5 and b6.<br>
> ><br>
> > Based on the above, I create a volume (an afr volume) like this:<br>
> ><br>
> > b1 and b3 make up a replicate subvolume rep-sub1<br>
> ><br>
> > b4 and b5 make up a replicate subvolume rep-sub2<br>
> ><br>
> > b2 and b6 make up a replicate subvolume rep-sub3<br>
> ><br>
> > And rep-sub1, rep-sub2 and rep-sub3 make up a distribute volume, AND I start the volume.<br>
> ><br>
> ><br>
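The layout described above can be created with the standard gluster CLI. A minimal sketch, assuming hostnames host-P1/host-P2/host-P3 and brick paths /data/bN, none of which appear in the original:

```shell
# Probe the peers from host-P1 (hostnames assumed for illustration).
gluster peer probe host-P2
gluster peer probe host-P3

# Create a 2-way replicated, 3-way distributed volume.
# Brick order defines the replica pairs: (b1,b3), (b4,b5), (b2,b6).
gluster volume create myvol replica 2 \
  host-P1:/data/b1 host-P2:/data/b3 \
  host-P2:/data/b4 host-P3:/data/b5 \
  host-P1:/data/b2 host-P3:/data/b6

gluster volume start myvol
```

Note that with this pairing, losing P1 takes one brick out of rep-sub1 and one out of rep-sub3, but each replica pair still has a surviving brick, which is why the volume stays accessible.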
> > Now, P1 has crashed or is just disconnected. I want to detach P1, and the started volume absolutely can't be stopped or deleted. So I did this: gluster peer detach host-P1.<br>
><br>
> This is destructive; detaching a peer that hosts bricks definitely<br>
> needs to be blocked, otherwise you technically lose the volume, as<br>
> Gluster is a distributed file system. Have you tried to analyze why<br>
> the node crashed? And is there any specific reason why you want<br>
> to stop the volume? Replication gives you high<br>
> availability, and your volume would still be accessible. Even if you<br>
> want to stop the volume, try the following:<br>
><br>
> 1. Restart glusterd; if it still fails, go to step 2<br>
> 2. Go for a peer-replacement procedure<br>
><br>
> Otherwise, you may try "volume stop force"; it may work too.<br>
><br>
> ><br>
> > But it does not work; according to the glusterfs error message printed on the shell, the reason is that P1 has bricks on it.<br>
> ><br>
> ><br>
> > So I commented out the code that raised the error above and tried again. It really works, which is amazing, and the VMs running on the volume are all right.<br>
> ><br>
> > BUT this leads to a big problem: glusterd fails to restart on both P2 and P3. When I remove the stuff below /var/lib/glusterfs/vols/, it restarts successfully. So I suspect there is something wrong with the volume configuration.<br>
> ><br>
> ><br>
> > My question is:<br>
> ><br>
> > Is there a method to detach P1 in the scenario above?<br>
> ><br>
> > Or what issues will I run into if I make it work by modifying the source code?<br>
> ><br>
> ><br>
> > thanks so much.<br>
> ><br>
> ><br>
> > _______________________________________________<br>
> > Gluster-users mailing list<br>
</div></div>> > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a> <mailto:<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>><br>
> > <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
<div class="HOEnZb"><div class="h5">><br>
><br>
><br>
><br>
><br>
</div></div></blockquote></div><br></div>