<div dir="ltr"><div><div><div>Hi Gaurav,<br><br></div>Have you got the time to analyze the logs.<br><br></div>Regards,<br></div>Abhishek<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Feb 25, 2016 at 11:23 AM, Gaurav Garg <span dir="ltr"><<a href="mailto:ggarg@redhat.com" target="_blank">ggarg@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">sure,<br>
<br>
Thanks,<br>
~Gaurav<br>
<span class="im HOEnZb"><br>
----- Original Message -----<br>
From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
</span><div class="HOEnZb"><div class="h5">Sent: Thursday, February 25, 2016 10:40:11 AM<br>
Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
<br>
Hi Gaurav,<br>
<br>
<br>
Here I am sharing the log.zip file, which contains the logs for both of the<br>
nodes as well as some other logs.<br>
<br>
I think we can now analyze the logs and find the actual cause of this<br>
issue.<br>
<br>
Regards,<br>
Abhishek<br>
<br>
On Wed, Feb 24, 2016 at 2:44 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>> wrote:<br>
<br>
> hi abhishek,<br>
><br>
> I need to look further into why you are ending up in this situation. The file<br>
> name and the UUID inside each file in /var/lib/glusterd/peers should be the same;<br>
> each file in /var/lib/glusterd/peers holds information about one peer in the cluster.<br>
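><br>
> For example, a peer entry typically looks something like this (the UUID is just<br>
> the one from this thread, and the exact fields may vary by GlusterFS version):<br>
><br>
> # ls /var/lib/glusterd/peers<br>
> b88c74b9-457d-4864-9fe6-403f6934d7d1<br>
> # cat /var/lib/glusterd/peers/b88c74b9-457d-4864-9fe6-403f6934d7d1<br>
> uuid=b88c74b9-457d-4864-9fe6-403f6934d7d1<br>
> state=3<br>
> hostname1=10.32.1.144<br>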
><br>
> Could you join the #gluster channel on freenode? Just ping me (IRC name:<br>
> ggarg) after joining the channel.<br>
><br>
> Thanks,<br>
> Gaurav<br>
><br>
><br>
> ----- Original Message -----<br>
> From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> Sent: Wednesday, February 24, 2016 12:31:51 PM<br>
> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
><br>
> Hi Gaurav,<br>
><br>
> I have noticed one more thing in the etc-glusterfs-glusterd.vol.log file with<br>
> respect to the UUID of peer <10.32.1.144>: it has two UUIDs.<br>
><br>
> Before removing the node the UUID is b88c74b9-457d-4864-9fe6-403f6934d7d1,<br>
> and after inserting the node the UUID is 5ec06937-5f85-4a9d-b29e-4227bbb7b4fa.<br>
><br>
> There is also one file in the glusterd/peers/ directory with the same name as<br>
> the first UUID.<br>
><br>
> What does this file in the peers directory mean? Does it provide some kind of<br>
> link between the two UUIDs?<br>
><br>
> Please find this file as an attachment.<br>
><br>
> Regards,<br>
> Abhishek<br>
><br>
> On Wed, Feb 24, 2016 at 12:06 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>> wrote:<br>
><br>
> > Hi abhishek,<br>
> ><br>
> > Yes, I looked into the configuration files that you provided; everything<br>
> > there seems to be fine.<br>
> ><br>
> > It seems like some other problem. I will look into it today and will come<br>
> > back to you.<br>
> ><br>
> > thanks,<br>
> ><br>
> > ~Gaurav<br>
> ><br>
> > ----- Original Message -----<br>
> > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> > To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> > Sent: Wednesday, February 24, 2016 12:02:47 PM<br>
> > Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
> ><br>
> > Hi Gaurav,<br>
> ><br>
> > Have you had time to look at the log files you asked for yesterday?<br>
> ><br>
> > Regards,<br>
> > Abhishek<br>
> ><br>
> > On Tue, Feb 23, 2016 at 3:05 PM, ABHISHEK PALIWAL <<br>
> <a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a><br>
> > ><br>
> > wrote:<br>
> ><br>
> > > Hi Gaurav,<br>
> > ><br>
> > > Please find the vol.tar file.<br>
> > ><br>
> > > Regards,<br>
> > > Abhishek<br>
> > ><br>
> > > On Tue, Feb 23, 2016 at 2:37 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>> wrote:<br>
> > ><br>
> > >> Hi abhishek,<br>
> > >><br>
> > >> >> But after analyzing the following logs from the 1st board, it seems that<br>
> > >> the process which updates the second brick in the output of "# gluster<br>
> > >> volume status c_glusterfs" takes some time to update this table, and<br>
> > >> remove-brick is executed before this table is updated; that is why it is<br>
> > >> failing.<br>
> > >><br>
> > >> It should not take that much time. If your peer probe is successful and you<br>
> > >> can see the 2nd board's peer entry in the #gluster peer status output, then<br>
> > >> it has already updated all of the volume information internally.<br>
> > >><br>
> > >> Your gluster volume status output is showing an entry for the 2nd board:<br>
> > >><br>
> > >> Brick 10.32.0.48:/opt/lvmdir/c2/brick    49153   0     Y    2537<br>
> > >> Self-heal Daemon on localhost        N/A    N/A    Y    5577<br>
> > >> Self-heal Daemon on 10.32.1.144      N/A    N/A    Y    3850<br>
> > >><br>
> > >> but it is not showing the 2nd board's brick entry.<br>
> > >><br>
> > >><br>
> > >> Did you perform any manual operation on the configuration files which<br>
> > >> reside in /var/lib/glusterd/*?<br>
> > >><br>
> > >> Could you attach/paste the<br>
> > >> /var/lib/glusterd/vols/c_glusterfs/trusted-*.tcp-fuse.vol file?<br>
> > >><br>
> > >><br>
> > >> Thanks,<br>
> > >><br>
> > >> Regards,<br>
> > >> Gaurav<br>
> > >><br>
> > >> ----- Original Message -----<br>
> > >> From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> > >> To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > >> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> > >> Sent: Tuesday, February 23, 2016 1:33:30 PM<br>
> > >> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster node<br>
> > >><br>
> > >> Hi Gaurav,<br>
> > >><br>
> > >> To check the network connectivity I am doing a peer probe to 10.32.1.144,<br>
> > >> i.e. the 2nd board; that works fine, which means connectivity is there.<br>
> > >><br>
> > >> #peer probe 10.32.1.144<br>
> > >><br>
> > >> If the above command succeeds, I execute the remove-brick command, which is<br>
> > >> failing.<br>
> > >><br>
> > >> So now it seems that the peer probe does not give the correct connectivity<br>
> > >> status for executing the remove-brick command.<br>
> > >><br>
> > >> But after analyzing the following logs from the 1st board, it seems that<br>
> > >> the process which updates the second brick in the output of "# gluster<br>
> > >> volume status c_glusterfs" takes some time to update this table, and<br>
> > >> remove-brick is executed before this table is updated; that is why it is<br>
> > >> failing.<br>
> > >><br>
> > >> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
> > >><br>
> > >> *1st board:*<br>
> > >> # gluster volume info<br>
> > >> status<br>
> > >> gluster volume status c_glusterfs<br>
> > >> Volume Name: c_glusterfs<br>
> > >> Type: Replicate<br>
> > >> Volume ID: 32793e91-6f88-4f29-b3e4-0d53d02a4b99<br>
> > >> Status: Started<br>
> > >> Number of Bricks: 1 x 2 = 2<br>
> > >> Transport-type: tcp<br>
> > >> Bricks:<br>
> > >> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick<br>
> > >> Brick2: 10.32.1.144:/opt/lvmdir/c2/brick<br>
> > >> Options Reconfigured:<br>
> > >> nfs.disable: on<br>
> > >> network.ping-timeout: 4<br>
> > >> performance.readdir-ahead: on<br>
> > >> # gluster peer status<br>
> > >> Number of Peers: 1<br>
> > >><br>
> > >> Hostname: 10.32.1.144<br>
> > >> Uuid: b88c74b9-457d-4864-9fe6-403f6934d7d1<br>
> > >> State: Peer in Cluster (Connected)<br>
> > >> # gluster volume status c_glusterfs<br>
> > >> Status of volume: c_glusterfs<br>
> > >> Gluster process               TCP Port RDMA Port Online Pid<br>
> > >><br>
> > >><br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > >><br>
> > >> Brick 10.32.0.48:/opt/lvmdir/c2/brick    49153   0     Y    2537<br>
> > >> Self-heal Daemon on localhost        N/A    N/A    Y    5577<br>
> > >> Self-heal Daemon on 10.32.1.144      N/A    N/A    Y    3850<br>
> > >><br>
> > >> Task Status of Volume c_glusterfs<br>
> > >><br>
> > >><br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > >><br>
> > >> There are no active volume tasks<br>
> > >><br>
> > >> +++++++++++++++++++++++++++++++++++++++++++++++<br>
> > >><br>
> > >> I'll try this with some delay, or wait to run remove-brick until the<br>
> > >> # gluster volume status c_glusterfs command shows the second brick in the<br>
> > >> list.<br>
> > >><br>
> > >> Maybe this approach will resolve the issue.<br>
> > >><br>
> > >> Please comment if you agree with my observation.<br>
> > >><br>
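> > >> A rough sketch of that wait (just an illustration; the volume and brick<br>
> > >> names are the ones from this setup):<br>
> > >><br>
> > >> # wait until the 2nd board's brick shows up in the volume status output<br>
> > >> while ! gluster volume status c_glusterfs | grep -q "Brick 10.32.1.144:/opt/lvmdir/c2/brick"<br>
> > >> do<br>
> > >>     sleep 1<br>
> > >> done<br>
> > >> gluster volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force --mode=script<br>
> > >><br>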
> > >> Regards,<br>
> > >> Abhishek<br>
> > >><br>
> > >> On Tue, Feb 23, 2016 at 1:10 PM, ABHISHEK PALIWAL <<br>
> > >> <a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> > >> wrote:<br>
> > >><br>
> > >> > Hi Gaurav,<br>
> > >> ><br>
> > >> > In my case we are removing the brick in the offline state with the force<br>
> > >> > option, in the following way:<br>
> > >> ><br>
> > >> > *gluster volume remove-brick %s replica 1 %s:%s force --mode=script*<br>
> > >> ><br>
> > >> > but we are still getting the remove-brick failure.<br>
> > >> ><br>
> > >> > It seems that the brick we are trying to remove is not present. Here are<br>
> > >> > the log snippets from both of the boards.<br>
> > >> ><br>
> > >> ><br>
> > >> > *1st board:*<br>
> > >> > # gluster volume info<br>
> > >> > status<br>
> > >> > gluster volume status c_glusterfs<br>
> > >> > Volume Name: c_glusterfs<br>
> > >> > Type: Replicate<br>
> > >> > Volume ID: 32793e91-6f88-4f29-b3e4-0d53d02a4b99<br>
> > >> > Status: Started<br>
> > >> > Number of Bricks: 1 x 2 = 2<br>
> > >> > Transport-type: tcp<br>
> > >> > Bricks:<br>
> > >> > Brick1: 10.32.0.48:/opt/lvmdir/c2/brick<br>
> > >> > Brick2: 10.32.1.144:/opt/lvmdir/c2/brick<br>
> > >> > Options Reconfigured:<br>
> > >> > nfs.disable: on<br>
> > >> > network.ping-timeout: 4<br>
> > >> > performance.readdir-ahead: on<br>
> > >> > # gluster peer status<br>
> > >> > Number of Peers: 1<br>
> > >> ><br>
> > >> > Hostname: 10.32.1.144<br>
> > >> > Uuid: b88c74b9-457d-4864-9fe6-403f6934d7d1<br>
> > >> > State: Peer in Cluster (Connected)<br>
> > >> > # gluster volume status c_glusterfs<br>
> > >> > Status of volume: c_glusterfs<br>
> > >> > Gluster process               TCP Port RDMA Port Online Pid<br>
> > >> ><br>
> > >><br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > >> ><br>
> > >> > Brick 10.32.0.48:/opt/lvmdir/c2/brick    49153   0     Y    2537<br>
> > >> > Self-heal Daemon on localhost        N/A    N/A    Y    5577<br>
> > >> > Self-heal Daemon on 10.32.1.144      N/A    N/A    Y    3850<br>
> > >> ><br>
> > >> > Task Status of Volume c_glusterfs<br>
> > >> ><br>
> > >><br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > >> ><br>
> > >> > There are no active volume tasks<br>
> > >> ><br>
> > >> > *2nd Board*:<br>
> > >> ><br>
> > >> > # gluster volume info<br>
> > >> > status<br>
> > >> > gluster volume status c_glusterfs<br>
> > >> > gluster volume heal c_glusterfs info<br>
> > >> ><br>
> > >> > Volume Name: c_glusterfs<br>
> > >> > Type: Replicate<br>
> > >> > Volume ID: 32793e91-6f88-4f29-b3e4-0d53d02a4b99<br>
> > >> > Status: Started<br>
> > >> > Number of Bricks: 1 x 2 = 2<br>
> > >> > Transport-type: tcp<br>
> > >> > Bricks:<br>
> > >> > Brick1: 10.32.0.48:/opt/lvmdir/c2/brick<br>
> > >> > Brick2: 10.32.1.144:/opt/lvmdir/c2/brick<br>
> > >> > Options Reconfigured:<br>
> > >> > performance.readdir-ahead: on<br>
> > >> > network.ping-timeout: 4<br>
> > >> > nfs.disable: on<br>
> > >> > # gluster peer status<br>
> > >> > Number of Peers: 1<br>
> > >> ><br>
> > >> > Hostname: 10.32.0.48<br>
> > >> > Uuid: e7c4494e-aa04-4909-81c9-27a462f6f9e7<br>
> > >> > State: Peer in Cluster (Connected)<br>
> > >> > # gluster volume status c_glusterfs<br>
> > >> > Status of volume: c_glusterfs<br>
> > >> > Gluster process               TCP Port RDMA Port Online Pid<br>
> > >> ><br>
> > >><br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > >> ><br>
> > >> > Brick 10.32.0.48:/opt/lvmdir/c2/brick    49153   0     Y    2537<br>
> > >> > Self-heal Daemon on localhost        N/A    N/A    Y    3850<br>
> > >> > Self-heal Daemon on 10.32.0.48       N/A    N/A    Y    5577<br>
> > >> ><br>
> > >> > Task Status of Volume c_glusterfs<br>
> > >> ><br>
> > >><br>
> ><br>
> ------------------------------------------------------------------------------<br>
> > >> ><br>
> > >> > There are no active volume tasks<br>
> > >> ><br>
> > >> > Do you know why these outputs are not showing the brick info in the<br>
> > >> > gluster volume status? Note that we are not able to collect the<br>
> > >> > cmd_history.log file from the 2nd board.<br>
> > >> ><br>
> > >> > Regards,<br>
> > >> > Abhishek<br>
> > >> ><br>
> > >> ><br>
> > >> > On Tue, Feb 23, 2016 at 12:02 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > wrote:<br>
> > >> ><br>
> > >> >> Hi abhishek,<br>
> > >> >><br>
> > >> >> >> Can we perform a remove-brick operation on an offline brick? What is<br>
> > >> >> >> the meaning of an offline and an online brick?<br>
> > >> >><br>
> > >> >> No, you can't perform a remove-brick operation on an offline brick. A<br>
> > >> >> brick being offline means the brick process is not running. You can see<br>
> > >> >> this by executing #gluster volume status: if a brick is offline, the<br>
> > >> >> respective brick will show an "N" entry in the Online column of the<br>
> > >> >> #gluster volume status output. Alternatively, you can check whether the<br>
> > >> >> glusterfsd process for that brick is running by executing<br>
> > >> >> #ps aux | grep glusterfsd; this command lists all the brick processes,<br>
> > >> >> and you can filter out which ones are online and which are not.<br>
> > >> >><br>
> > >> >> But if you want to perform a remove-brick operation on an offline brick,<br>
> > >> >> then you need to execute it with the force option: #gluster volume<br>
> > >> >> remove-brick <volname> hostname:/brick_name force. This might lead to<br>
> > >> >> data loss.<br>
> > >> >><br>
> > >> >><br>
> > >> >><br>
> > >> >> >> Also, is there any logic in Gluster through which we can check whether<br>
> > >> >> >> the connectivity of a node is established before performing any<br>
> > >> >> >> operation on a brick?<br>
> > >> >><br>
> > >> >> Yes, you can check it by executing the #gluster peer status command.<br>
> > >> >><br>
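> > >> >> Putting the checks above together, a minimal sketch could look like this<br>
> > >> >> (the volume and brick names are just the ones from this thread):<br>
> > >> >><br>
> > >> >> # gluster peer status                  (peer should show "Connected")<br>
> > >> >> # gluster volume status c_glusterfs    (brick should show "Y" under Online)<br>
> > >> >> # ps aux | grep glusterfsd             (one glusterfsd process per running brick)<br>
> > >> >> # gluster volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force<br>
> > >> >> (force also works for an offline brick, but it might lead to data loss)<br>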
> > >> >><br>
> > >> >> Thanks,<br>
> > >> >><br>
> > >> >> ~Gaurav<br>
> > >> >><br>
> > >> >><br>
> > >> >> ----- Original Message -----<br>
> > >> >> From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> > >> >> To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > >> >> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> > >> >> Sent: Tuesday, February 23, 2016 11:50:43 AM<br>
> > >> >> Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster<br>
> > node<br>
> > >> >><br>
> > >> >> Hi Gaurav,<br>
> > >> >><br>
> > >> >> One general question related to gluster bricks:<br>
> > >> >><br>
> > >> >> Can we perform a remove-brick operation on an offline brick? What is the<br>
> > >> >> meaning of an offline and an online brick?<br>
> > >> >> Also, is there any logic in Gluster through which we can check whether<br>
> > >> >> the connectivity of a node is established before performing any<br>
> > >> >> operation on a brick?<br>
> > >> >><br>
> > >> >> Regards,<br>
> > >> >> Abhishek<br>
> > >> >><br>
> > >> >> On Mon, Feb 22, 2016 at 2:42 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > wrote:<br>
> > >> >><br>
> > >> >> > Hi abhishek,<br>
> > >> >> ><br>
> > >> >> > I went through your logs from node 1, and the glusterd logs clearly<br>
> > >> >> > indicate that your 2nd node (10.32.1.144) disconnected from the<br>
> > >> >> > cluster; because of that, the remove-brick operation failed. I think<br>
> > >> >> > you need to check your network interface.<br>
> > >> >> ><br>
> > >> >> > But the surprising thing is that I did not see a duplicate peer entry<br>
> > >> >> > in the #gluster peer status command output.<br>
> > >> >> ><br>
> > >> >> > Maybe I will get some more information from your 2nd node's (10.32.1.144)<br>
> > >> >> > logs. Could you also attach them?<br>
> > >> >> ><br>
> > >> >> > After restarting glusterd, are you still seeing a duplicate peer entry in<br>
> > >> >> > the #gluster peer status command output?<br>
> > >> >> ><br>
> > >> >> > I will wait for the 2nd node's logs before analyzing the duplicate peer<br>
> > >> >> > entry problem further.<br>
> > >> >> ><br>
> > >> >> > Thanks,<br>
> > >> >> ><br>
> > >> >> > ~Gaurav<br>
> > >> >> ><br>
> > >> >> > ----- Original Message -----<br>
> > >> >> > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> > >> >> > To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > >> >> > Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> > >> >> > Sent: Monday, February 22, 2016 12:48:55 PM<br>
> > >> >> > Subject: Re: [Gluster-users] Issue in Adding/Removing the gluster<br>
> > >> node<br>
> > >> >> ><br>
> > >> >> > Hi Gaurav,<br>
> > >> >> ><br>
> > >> >> > Here you can find the attached logs for the boards in the remove-brick<br>
> > >> >> > failure case.<br>
> > >> >> > These logs do not include the cmd_history.log and<br>
> > >> >> > etc-glusterfs-glusterd.vol.log for the second board.<br>
> > >> >> ><br>
> > >> >> > We may need some more time to get those.<br>
> > >> >> ><br>
> > >> >> ><br>
> > >> >> > Regards,<br>
> > >> >> > Abhishek<br>
> > >> >> ><br>
> > >> >> > On Mon, Feb 22, 2016 at 10:18 AM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > >> wrote:<br>
> > >> >> ><br>
> > >> >> > > Hi Abhishek,<br>
> > >> >> > ><br>
> > >> >> > > >> I'll provide the required log to you.<br>
> > >> >> > ><br>
> > >> >> > > sure<br>
> > >> >> > ><br>
> > >> >> > > On both nodes, do "pkill glusterd" and then start the glusterd<br>
> > >> >> > > service again.<br>
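> > >> >> > ><br>
> > >> >> > > For example (a sketch only; adjust this if glusterd is managed by an<br>
> > >> >> > > init/service manager on your boards):<br>
> > >> >> > ><br>
> > >> >> > > # pkill glusterd    (stop the management daemon on each node)<br>
> > >> >> > > # glusterd          (start it again)<br>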
> > >> >> > ><br>
> > >> >> > > Thanks,<br>
> > >> >> > ><br>
> > >> >> > > ~Gaurav<br>
> > >> >> > ><br>
> > >> >> > > ----- Original Message -----<br>
> > >> >> > > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> > >> >> > > To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > >> >> > > Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> > >> >> > > Sent: Monday, February 22, 2016 10:11:48 AM<br>
> > >> >> > > Subject: Re: [Gluster-users] Issue in Adding/Removing the<br>
> gluster<br>
> > >> node<br>
> > >> >> > ><br>
> > >> >> > > Hi Gaurav,<br>
> > >> >> > ><br>
> > >> >> > > Thanks for your prompt reply.<br>
> > >> >> > ><br>
> > >> >> > > I'll provide the required log to you.<br>
> > >> >> > ><br>
> > >> >> > > As a workaround you suggested restarting the glusterd service. Could<br>
> > >> >> > > you please tell me at which point I should do this?<br>
> > >> >> > ><br>
> > >> >> > > Regards,<br>
> > >> >> > > Abhishek<br>
> > >> >> > ><br>
> > >> >> > > On Fri, Feb 19, 2016 at 6:11 PM, Gaurav Garg <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a><br>
> ><br>
> > >> >> wrote:<br>
> > >> >> > ><br>
> > >> >> > > > Hi Abhishek,<br>
> > >> >> > > ><br>
> > >> >> > > > The peer status output looks interesting: it has a stale entry,<br>
> > >> >> > > > which technically should not happen. I need to ask a few things:<br>
> > >> >> > > ><br>
> > >> >> > > > Did you perform any manual operation on the GlusterFS configuration<br>
> > >> >> > > > files which reside in the /var/lib/glusterd/* folder?<br>
> > >> >> > > ><br>
> > >> >> > > > Can you provide the output of "ls /var/lib/glusterd/peers" from both<br>
> > >> >> > > > of your nodes?<br>
> > >> >> > > ><br>
> > >> >> > > > Could you provide the output of the #gluster peer status command<br>
> > >> >> > > > when the 2nd node is down?<br>
> > >> >> > > ><br>
> > >> >> > > > Can you provide the output of the #gluster volume info command?<br>
> > >> >> > > ><br>
> > >> >> > > > Can you provide the full cmd_history.log and<br>
> > >> >> > > > etc-glusterfs-glusterd.vol.log from both of the nodes?<br>
> > >> >> > > ><br>
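> > >> >> > > > (For reference, a sketch of how that information could be collected<br>
> > >> >> > > > on each node; the log locations assume the default /var/log/glusterfs<br>
> > >> >> > > > directory):<br>
> > >> >> > > ><br>
> > >> >> > > > # ls /var/lib/glusterd/peers<br>
> > >> >> > > > # gluster peer status        (run this again while the 2nd node is down)<br>
> > >> >> > > > # gluster volume info<br>
> > >> >> > > > # ls /var/log/glusterfs/cmd_history.log /var/log/glusterfs/etc-glusterfs-glusterd.vol.log<br>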
> > >> >> > > ><br>
> > >> >> > > > You can restart your glusterd as of now as a workaround but<br>
> we<br>
> > >> need<br>
> > >> >> to<br>
> > >> >> > > > analysis this issue further.<br>
> > >> >> > > ><br>
> > >> >> > > > Thanks,<br>
> > >> >> > > > Gaurav<br>
> > >> >> > > ><br>
> > >> >> > > > ----- Original Message -----<br>
> > >> >> > > > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> > >> >> > > > To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > >> >> > > > Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> > >> >> > > > Sent: Friday, February 19, 2016 5:27:21 PM<br>
> > >> >> > > > Subject: Re: [Gluster-users] Issue in Adding/Removing the<br>
> > gluster<br>
> > >> >> node<br>
> > >> >> > > ><br>
> > >> >> > > > Hi Gaurav,<br>
> > >> >> > > ><br>
> > >> >> > > > After the add-brick failure, the following is the output of the<br>
> > >> >> > > > "gluster peer status" command:<br>
> > >> >> > > ><br>
> > >> >> > > > Number of Peers: 2<br>
> > >> >> > > ><br>
> > >> >> > > > Hostname: 10.32.1.144<br>
> > >> >> > > > Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e<br>
> > >> >> > > > State: Peer in Cluster (Connected)<br>
> > >> >> > > ><br>
> > >> >> > > > Hostname: 10.32.1.144<br>
> > >> >> > > > Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e<br>
> > >> >> > > > State: Peer in Cluster (Connected)<br>
> > >> >> > > ><br>
> > >> >> > > > Regards,<br>
> > >> >> > > > Abhishek<br>
> > >> >> > > ><br>
> > >> >> > > > On Fri, Feb 19, 2016 at 5:21 PM, ABHISHEK PALIWAL <<br>
> > >> >> > > <a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a><br>
> > >> >> > > > ><br>
> > >> >> > > > wrote:<br>
> > >> >> > > ><br>
> > >> >> > > > > Hi Gaurav,<br>
> > >> >> > > > ><br>
> > >> >> > > > > Both boards are connected through the backplane using Ethernet.<br>
> > >> >> > > > ><br>
> > >> >> > > > > This inconsistency also occurs when I am trying to bring the node<br>
> > >> >> > > > > back into the slot: sometimes add-brick executes without failure,<br>
> > >> >> > > > > but sometimes the following error occurs:<br>
> > >> >> > > > ><br>
> > >> >> > > > > volume add-brick c_glusterfs replica 2 <a href="http://10.32.1.144" rel="noreferrer" target="_blank">10.32.1.144</a>:/opt/lvmdir/c2/brick force : FAILED : Another transaction is in progress for c_glusterfs. Please try again after sometime.<br>
> > >> >> > > > ><br>
> > >> >> > > > ><br>
> > >> >> > > > > You can also see the attached logs for the add-brick failure<br>
> > >> >> > > > > scenario.<br>
> > >> >> > > > ><br>
> > >> >> > > > > Please let me know if you need more logs.<br>
> > >> >> > > > ><br>
> > >> >> > > > > Regards,<br>
> > >> >> > > > > Abhishek<br>
> > >> >> > > > ><br>
> > >> >> > > > ><br>
> > >> >> > > > > On Fri, Feb 19, 2016 at 5:03 PM, Gaurav Garg <<br>
> > <a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a><br>
> > >> ><br>
> > >> >> > wrote:<br>
> > >> >> > > > ><br>
> > >> >> > > > >> Hi Abhishek,<br>
> > >> >> > > > >><br>
> > >> >> > > > >> How are you connecting the two boards, and how are you removing<br>
> > >> >> > > > >> one manually? I need to know this because if you are removing<br>
> > >> >> > > > >> your 2nd board from the cluster (abrupt shutdown), then you can't<br>
> > >> >> > > > >> perform a remove-brick operation for the 2nd node's brick from<br>
> > >> >> > > > >> the first node, yet it is happening successfully in your case.<br>
> > >> >> > > > >> Could you check your network connection once again while removing<br>
> > >> >> > > > >> and bringing back your node?<br>
> > >> >> > > > >><br>
> > >> >> > > > >> Thanks,<br>
> > >> >> > > > >> Gaurav<br>
> > >> >> > > > >><br>
> > >> >> > > > >> ------------------------------<br>
> > >> >> > > > >> *From: *"ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> > >> >> > > > >> *To: *"Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > >> >> > > > >> *Cc: *<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> > >> >> > > > >> *Sent: *Friday, February 19, 2016 3:36:21 PM<br>
> > >> >> > > > >><br>
> > >> >> > > > >> *Subject: *Re: [Gluster-users] Issue in Adding/Removing<br>
> the<br>
> > >> >> gluster<br>
> > >> >> > > node<br>
> > >> >> > > > >><br>
> > >> >> > > > >> Hi Gaurav,<br>
> > >> >> > > > >><br>
> > >> >> > > > >> Thanks for the reply.<br>
> > >> >> > > > >><br>
> > >> >> > > > >> 1. Here I removed the board manually, but this time it works fine:<br>
> > >> >> > > > >><br>
> > >> >> > > > >> [2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
> > >> >> > > > >> [2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS<br>
> > >> >> > > > >><br>
> > >> >> > > > >> Yes, this time the board is reachable, but how? I don't know,<br>
> > >> >> > > > >> because the board is detached.<br>
> > >> >> > > > >><br>
> > >> >> > > > >> 2. Here I attached the board; this time add-brick works fine:<br>
> > >> >> > > > >><br>
> > >> >> > > > >> [2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS<br>
> > >> >> > > > >> [2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
> > >> >> > > > >><br>
> > >> >> > > > >> 3. Here I removed the board again, and this time a failure occurs:<br>
> > >> >> > > > >><br>
> > >> >> > > > >> [2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
> > >> >> > > > >><br>
> > >> >> > > > >> But here the board is not reachable.<br>
> > >> >> > > > >><br>
> > >> >> > > > >> Why is there this inconsistency while doing the same steps<br>
> > >> >> > > > >> multiple times?<br>
> > >> >> > > > >><br>
> > >> >> > > > >> Hope you are getting my point.<br>
> > >> >> > > > >><br>
> > >> >> > > > >> Regards,<br>
> > >> >> > > > >> Abhishek<br>
> > >> >> > > > >><br>
> > >> >> > > > >> On Fri, Feb 19, 2016 at 3:25 PM, Gaurav Garg <<br>
> > >> <a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > >> >> > > wrote:<br>
> > >> >> > > > >><br>
> > >> >> > > > >>> Abhishek,<br>
> > >> >> > > > >>><br>
> > >> >> > > > >>> When it sometimes works fine, that means the 2nd board's network<br>
> > >> >> > > > >>> connection is reachable from the first node. You can confirm this<br>
> > >> >> > > > >>> by executing the same #gluster peer status command.<br>
> > >> >> > > > >>><br>
> > >> >> > > > >>> Thanks,<br>
> > >> >> > > > >>> Gaurav<br>
> > >> >> > > > >>><br>
> > >> >> > > > >>> ----- Original Message -----<br>
> > >> >> > > > >>> From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> > >> >> > > > >>> To: "Gaurav Garg" <<a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > >> >> > > > >>> Cc: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> > >> >> > > > >>> Sent: Friday, February 19, 2016 3:12:22 PM<br>
> > >> >> > > > >>> Subject: Re: [Gluster-users] Issue in Adding/Removing the<br>
> > >> >> gluster<br>
> > >> >> > > node<br>
> > >> >> > > > >>><br>
> > >> >> > > > >>> Hi Gaurav,<br>
> > >> >> > > > >>><br>
> > >> >> > > > >>> Yes, you are right. Actually I am forcefully detaching the node<br>
> > >> >> > > > >>> from the slave, and when we remove the board it disconnects from<br>
> > >> >> > > > >>> the other board.<br>
> > >> >> > > > >>><br>
> > >> >> > > > >>> But my question is: I am doing this process multiple times;<br>
> > >> >> > > > >>> sometimes it works fine, but sometimes it gives these errors.<br>
> > >> >> > > > >>><br>
> > >> >> > > > >>> You can see the following logs from the cmd_history.log file:<br>
> > >> >> > > > >>><br>
> > >> >> > > > >>> [2016-02-18 10:03:34.497996] : volume set c_glusterfs nfs.disable on : SUCCESS<br>
> > >> >> > > > >>> [2016-02-18 10:03:34.915036] : volume start c_glusterfs force : SUCCESS<br>
> > >> >> > > > >>> [2016-02-18 10:03:40.250326] : volume status : SUCCESS<br>
> > >> >> > > > >>> [2016-02-18 10:03:40.273275] : volume status : SUCCESS<br>
> > >> >> > > > >>> [2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
> > >> >> > > > >>> [2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS<br>
> > >> >> > > > >>> [2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS<br>
> > >> >> > > > >>> [2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS<br>
> > >> >> > > > >>> [2016-02-18 10:30:53.297415] : volume status : SUCCESS<br>
> > >> >> > > > >>> [2016-02-18 10:30:53.313096] : volume status : SUCCESS<br>
> > >> >> > > > >>> [2016-02-18 10:37:02.748714] : volume status : SUCCESS<br>
> > >> >> > > > >>> [2016-02-18 10:37:02.762091] : volume status : SUCCESS<br>
> > >> >> > > > >>> [2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
> > >> >> > > > >>><br>
> > >> >> > > > >>><br>
> > >> >> > > > >>> On Fri, Feb 19, 2016 at 3:05 PM, Gaurav Garg <<br>
> > >> <a href="mailto:ggarg@redhat.com">ggarg@redhat.com</a>><br>
> > >> >> > > wrote:<br>
> > >> >> > > > >>><br>
> > >> >> > > > >>> > Hi Abhishek,<br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> > It seems your peer 10.32.1.144 disconnected while the<br>
> > >> >> > > > >>> > remove-brick was being done. See the logs below from glusterd:<br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> > [2016-02-18 10:37:02.816009] E [MSGID: 106256] [glusterd-brick-ops.c:1047:__glusterd_handle_remove_brick] 0-management: Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs [Invalid argument]<br>
> > >> >> > > > >>> > [2016-02-18 10:37:02.816061] E [MSGID: 106265] [glusterd-brick-ops.c:1088:__glusterd_handle_remove_brick] 0-management: Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
> > >> >> > > > >>> > The message "I [MSGID: 106004] [glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: Peer <10.32.1.144> (<6adf57dc-c619-4e56-ae40-90e6aef75fe9>), in state <Peer in Cluster>, has disconnected from glusterd." repeated 25 times between [2016-02-18 10:35:43.131945] and [2016-02-18 10:36:58.160458]<br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> > If you are facing the same issue now, could you paste your<br>
> > >> >> > > > >>> > # gluster peer status command output here?<br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> > Thanks,<br>
> > >> >> > > > >>> > ~Gaurav<br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> > ----- Original Message -----<br>
> > >> >> > > > >>> > From: "ABHISHEK PALIWAL" <<a href="mailto:abhishpaliwal@gmail.com">abhishpaliwal@gmail.com</a>><br>
> > >> >> > > > >>> > To: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a><br>
> > >> >> > > > >>> > Sent: Friday, February 19, 2016 2:46:35 PM<br>
> > >> >> > > > >>> > Subject: [Gluster-users] Issue in Adding/Removing the<br>
> > >> gluster<br>
> > >> >> > node<br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> > Hi,<br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> > I am working on a two-board setup where the boards are<br>
> > >> >> > > > >>> > connected to each other. Gluster version 3.7.6 is running, and<br>
> > >> >> > > > >>> > I added two bricks in replica 2 mode, but when I manually<br>
> > >> >> > > > >>> > remove (detach) one board from the setup I get the following<br>
> > >> >> > > > >>> > error:<br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> > volume remove-brick c_glusterfs replica 1 <a href="http://10.32.1.144" rel="noreferrer" target="_blank">10.32.1.144</a>:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick <a href="http://10.32.1.144" rel="noreferrer" target="_blank">10.32.1.144</a>:/opt/lvmdir/c2/brick for volume c_glusterfs<br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> > Please find the log files attached.<br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> > Regards,<br>
> > >> >> > > > >>> > Abhishek<br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>> > _______________________________________________<br>
> > >> >> > > > >>> > Gluster-users mailing list<br>
> > >> >> > > > >>> > <a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
> > >> >> > > > >>> > <a href="http://www.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
> > >> >> > > > >>> ><br>
> > >> >> > > > >>><br>
> > >> >> > > > >>><br>
> > >> >> > > > >>><br>
> > >> >> > > > >>> --<br>
> > >> >> > > > >>><br>
> > >> >> > > > >>><br>
> > >> >> > > > >>><br>
> > >> >> > > > >>><br>
> > >> >> > > > >>> Regards<br>
> > >> >> > > > >>> Abhishek Paliwal<br>
> > >> >> > > > >>><br>
> > >> >> > > > >><br>
> > >> >> > > > >><br>
> > >> >> > > > >><br>
> > >> >> > > > >> --<br>
> > >> >> > > > >><br>
> > >> >> > > > >><br>
> > >> >> > > > >><br>
> > >> >> > > > >><br>
> > >> >> > > > >> Regards<br>
> > >> >> > > > >> Abhishek Paliwal<br>
> > >> >> > > > >><br>
> > >> >> > > > >><br>
> > >> >> > > > ><br>
> > >> >> > > > ><br>
> > >> >> > > > ><br>
> > >> >> > > > ><br>
> > >> >> > > ><br>
> > >> >> > > ><br>
> > >> >> > > > --<br>
> > >> >> > > ><br>
> > >> >> > > ><br>
> > >> >> > > ><br>
> > >> >> > > ><br>
> > >> >> > > > Regards<br>
> > >> >> > > > Abhishek Paliwal<br>
> > >> >> > > ><br>
> > >> >> > ><br>
> > >> >> ><br>
> > >> >> ><br>
> > >> >> ><br>
> > >> >> > --<br>
> > >> >> ><br>
> > >> >> ><br>
> > >> >> ><br>
> > >> >> ><br>
> > >> >> > Regards<br>
> > >> >> > Abhishek Paliwal<br>
> > >> >> ><br>
> > >> >><br>
> > >> >><br>
> > >> >><br>
> > >> >> --<br>
> > >> >><br>
> > >> >><br>
> > >> >><br>
> > >> >><br>
> > >> >> Regards<br>
> > >> >> Abhishek Paliwal<br>
> > >> >><br>
> > >> ><br>
> > >> ><br>
> > >> ><br>
> > >> > --<br>
> > >> ><br>
> > >> ><br>
> > >> ><br>
> > >> ><br>
> > >> > Regards<br>
> > >> > Abhishek Paliwal<br>
> > >> ><br>
> > >><br>
> > >><br>
> > >><br>
> > >> --<br>
> > >><br>
> > >><br>
> > >><br>
> > >><br>
> > >> Regards<br>
> > >> Abhishek Paliwal<br>
> > >><br>
> > ><br>
> > ><br>
> > ><br>
> > > --<br>
> > ><br>
> > ><br>
> > ><br>
> > ><br>
> > > Regards<br>
> > > Abhishek Paliwal<br>
> > ><br>
> ><br>
> ><br>
> ><br>
> > --<br>
> ><br>
> ><br>
> ><br>
> ><br>
> > Regards<br>
> > Abhishek Paliwal<br>
> ><br>
><br>
><br>
><br>
> --<br>
><br>
><br>
><br>
><br>
> Regards<br>
> Abhishek Paliwal<br>
><br>
<br>
<br>
<br>
--<br>
<br>
<br>
<br>
<br>
Regards<br>
Abhishek Paliwal<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature"><div dir="ltr"><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</div>