<div dir="ltr"><div><div><div><div>Hi,<br><br></div>No. I did run add-brick on a volume with the same configuration as that of Kevin, while IO was running, except<br></div>that I wasn't running VM workload. I compared the file checksums wrt the original src files from which they were copied<br></div>and they matched.<br><br><br></div><div>@Kevin,<br><br></div><div>I see that network.ping-timeout on your setup is 15 seconds and that's too low. Could you reconfigure that to 30 seconds?<br><br></div><div>-Krutika<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Oct 14, 2016 at 9:07 PM, David Gossage <span dir="ltr"><<a href="mailto:dgossage@carouselchecks.com" target="_blank">dgossage@carouselchecks.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Sorry to resurrect an old email but did any resolution occur for this or a cause found? I just see this as a potential task I may need to also run through some day and if their are pitfalls to watch for would be good to know.</div><div class="gmail_extra"><span class="HOEnZb"><font color="#888888"><br clear="all"><div><div class="m_-6422857459602100065gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><span><font color="#888888"><span style="color:rgb(0,0,0)"><b><i>David Gossage</i></b></span><font><i><span style="color:rgb(51,51,51)"><b><br>
-Krutika

On Fri, Oct 14, 2016 at 9:07 PM, David Gossage <dgossage@carouselchecks.com> wrote:

Sorry to resurrect an old email, but did any resolution occur for this, or was a cause found? I just see this as a potential task I may need to run through some day, and if there are pitfalls to watch for it would be good to know.

David Gossage
Carousel Checks Inc. | System Administrator
Office 708.613.2284
<br><div class="gmail_quote">On Tue, Sep 6, 2016 at 5:38 AM, Kevin Lemonnier <span dir="ltr"><<a href="mailto:lemonnierk@ulrar.net" target="_blank">lemonnierk@ulrar.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
Here is the info :<br>
<br>
Volume Name: VMs<br>
Type: Replicate<br>
Volume ID: c5272382-d0c8-4aa4-aced-dd25a0<wbr>64e45c<br>
Status: Started<br>
Number of Bricks: 1 x 3 = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: ips4adm.name:/mnt/storage/VMs<br>
Brick2: ips5adm.name:/mnt/storage/VMs<br>
Brick3: ips6adm.name:/mnt/storage/VMs<br>
Options Reconfigured:<br>
performance.readdir-ahead: on<br>
cluster.quorum-type: auto<br>
cluster.server-quorum-type: server<br>
network.remote-dio: enable<br>
cluster.eager-lock: enable<br>
performance.quick-read: off<br>
performance.read-ahead: off<br>
performance.io-cache: off<br>
performance.stat-prefetch: off<br>
features.shard: on<br>
features.shard-block-size: 64MB<br>
cluster.data-self-heal-algorit<wbr>hm: full<br>
network.ping-timeout: 15<br>
<br>
<br>
For the logs I'm sending that over to you in private.<br>
<br>
<br>
On Tue, Sep 06, 2016 at 09:48:07AM +0530, Krutika Dhananjay wrote:
> Could you please attach the glusterfs client and brick logs?
> Also provide output of `gluster volume info`.
>
> -Krutika
>
> On Tue, Sep 6, 2016 at 4:29 AM, Kevin Lemonnier <lemonnierk@ulrar.net> wrote:
>
> >  - What was the original (and current) geometry? (status and info)
>
> It was a 1x3 that I was trying to bump to 2x3.
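(In `gluster volume info` terms, that means going from

    Number of Bricks: 1 x 3 = 3

to

    Number of Bricks: 2 x 3 = 6

i.e. two distribute subvolumes of three replicas each.)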
>
> >  - What parameters did you use when adding the bricks?
>
> Just a simple add-brick node1:/path node2:/path node3:/path.
> Then a fix-layout when everything started going wrong.
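(Spelled out with the full CLI, that corresponds roughly to the following; the node names and paths are the placeholders from the message above:

    # expand the replica-3 volume from 1x3 to 2x3
    gluster volume add-brick VMs node1:/path node2:/path node3:/path

    # then recompute the directory layout to include the new bricks
    gluster volume rebalance VMs fix-layout start
)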
>
> I was able to salvage some VMs by stopping them and starting them again,
> but most won't start for various reasons (disk corrupted, grub not found,
> ...). For those we are deleting the disks and importing them from backups.
> That's a huge loss, but everything has been down for so long there was no
> choice.
> > On 6/09/2016 8:00 AM, Kevin Lemonnier wrote:
> >
> > I tried a fix-layout, and since that didn't work I removed the bricks
> > (start, then commit once it showed completed). Not better: the volume is
> > now running on the 3 original bricks (replica 3) but the VMs are still
> > corrupted. For some reason I have 880 MB of shards left on the bricks I
> > removed; those shards do exist (and are bigger) on the "live" volume. I
> > don't understand why, now that I have removed the new bricks, everything
> > isn't working like before.
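(The start/commit sequence described, spelled out with the full CLI and the same placeholder bricks as above, would look roughly like:

    # migrate data off the new bricks, then drop them once "completed"
    gluster volume remove-brick VMs node1:/path node2:/path node3:/path start
    gluster volume remove-brick VMs node1:/path node2:/path node3:/path status
    gluster volume remove-brick VMs node1:/path node2:/path node3:/path commit
)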
> >
> > On Mon, Sep 05, 2016 at 11:06:16PM +0200, Kevin Lemonnier wrote:
> >
> > Hi,
> >
> > I just added 3 bricks to a volume and all the VMs are doing I/O errors
> > now. I rebooted a VM to see, and it can't start again. Am I missing
> > something? Is the rebalance required to make everything run?
> >
> > That's urgent, thanks.
> >
> > --
> > Kevin Lemonnier
> > PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
> >
> > --
> > Lindsay Mathieson
>
> --
> Kevin Lemonnier
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
<span class="m_-6422857459602100065HOEnZb"><font color="#888888"><br>
--<br>
Kevin Lemonnier<br>
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111<br>

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users