<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Dec 14, 2016 at 1:34 PM, Miloš Čučulović - MDPI <span dir="ltr">&lt;<a href="mailto:cuculovic@mdpi.com" target="_blank">cuculovic@mdpi.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Atin,<br>
<br>
I was able to move forward a bit. Initially, I had this:<br>
<br>
sudo gluster peer status<br>
Number of Peers: 1<br>
<br>
Hostname: storage2<br>
Uuid: 32bef70a-9e31-403e-b9f3-ec9e1b<wbr>d162ad<br>
State: Peer Rejected (Connected)<br>
<br>
Then, on storage2 I removed all from /var/lib/glusterd except the info file.<br>
<br>
Now I am getting another error message:<br>
<br>
sudo gluster peer status<br>
Number of Peers: 1<br>
<br>
Hostname: storage2<br>
Uuid: 32bef70a-9e31-403e-b9f3-ec9e1b<wbr>d162ad<br>
State: Sent and Received peer request (Connected)<br></blockquote><div><br><br></div><div>Please edit /var/lib/glusterd/peers/32bef70a-9e31-403e-b9f3-ec9e1b<wbr>d162ad file and set the state to 3 in storage1 and restart glusterd instance.<br><br> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
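
For reference, on storage1 that edit would look roughly like this (a sketch only; it assumes the peer file uses a "state=" line, as it normally does, and that glusterd is managed by systemd):

# on storage1 (illustrative commands for the fix described above)
systemctl stop glusterd
# peer file for storage2: set the state field to 3 ("Peer in Cluster")
sed -i 's/^state=.*/state=3/' /var/lib/glusterd/peers/32bef70a-9e31-403e-b9f3-ec9e1bd162ad
systemctl start glusterd
gluster peer status   # should now report "State: Peer in Cluster (Connected)"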

But the add brick is still not working. I checked the hosts file and all seems OK; ping is also working well.

The thing I also need to know: when adding a new replicated brick, do I need to first sync all the files, or does the new brick server need to be empty? Also, do I first need to create the same volume on the new server, or will adding it to the volume of server1 do that automatically?

- Kindest regards,

Milos Cuculovic
IT Manager

---
MDPI AG
Postfach, CH-4020 Basel, Switzerland
Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 35
Fax +41 61 302 89 18
Email: cuculovic@mdpi.com
Skype: milos.cuculovic.mdpi

On 14.12.2016 05:13, Atin Mukherjee wrote:

Milos,

I just managed to take a look into a similar issue and my analysis is at [1]. I remember you mentioning some incorrect /etc/hosts entries which led to this same problem in an earlier case; do you mind rechecking that?

[1] http://www.gluster.org/pipermail/gluster-users/2016-December/029443.html

On Wed, Dec 14, 2016 at 2:57 AM, Miloš Čučulović - MDPI <cuculovic@mdpi.com> wrote:

    Hi All,

    Moving forward with my issue, sorry for the late reply!

    I had some issues with the storage2 server (original volume), then
    decided to use 3.9.0, so I have the latest version.

    For that, I manually synced all the files to the storage server. I
    installed gluster 3.9.0 there, started it, created a new volume called
    storage, and all seems to work OK.

    Now I need to create my replicated volume (add the new brick on the
    storage2 server). Almost all the files are there. So I was running,
    on the storage server:

    * sudo gluster peer probe storage2
    * sudo gluster volume add-brick storage replica 2 storage2:/data/data-cluster force

    But there I am receiving "volume add-brick: failed: Host storage2 is
    not in 'Peer in Cluster' state".

    Any idea?

    - Kindest regards,

    Milos Cuculovic
    IT Manager

    ---
    MDPI AG
    Postfach, CH-4020 Basel, Switzerland
    Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
    Tel. +41 61 683 77 35
    Fax +41 61 302 89 18
    Email: cuculovic@mdpi.com
    Skype: milos.cuculovic.mdpi

    On 08.12.2016 17:52, Ravishankar N wrote:

        On 12/08/2016 09:44 PM, Miloš Čučulović - MDPI wrote:

            I was able to fix the sync by rsync-ing all the directories,
            then the heal started. The next problem :): as soon as there
            are files on the new brick, the gluster mount starts serving
            reads from this brick too, but the new brick is not ready yet,
            as the sync is not yet done, so it results in missing files on
            the client side. I temporarily removed the new brick; now I am
            running a manual rsync and will add the brick again, and I
            hope this will work.

            What mechanism is managing this issue? I guess there is
            something pre-built to make a replica brick available only
            once the data is completely synced.

        This mechanism was introduced in 3.7.9 or 3.7.10
        (http://review.gluster.org/#/c/13806/). Before that version, you
        needed to manually set some xattrs on the bricks so that healing
        could happen in parallel while the client would still serve reads
        from the original brick. I can't find the link to the doc which
        describes these steps for setting the xattrs. :-(
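
        (I don't have the exact documented steps at hand either; as a
        rough, purely illustrative sketch of the idea, one would mark
        pending heals on the root of the original brick via the AFR
        changelog xattr for the newly added brick, e.g.:)

        # Illustration only: the xattr name (volume name plus client index of
        # the new brick) and the exact value have to come from the actual doc.
        setfattr -n trusted.afr.storage-client-1 -v 0x000000000000000100000001 /data/data-cluster

        (That tells self-heal that the other replica still needs metadata and
        entry heals, while reads keep being served from this brick.)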

        Calling it a day,
        Ravi

            - Kindest regards,

            Milos Cuculovic
            IT Manager

            ---
            MDPI AG
            Postfach, CH-4020 Basel, Switzerland
            Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
            Tel. +41 61 683 77 35
            Fax +41 61 302 89 18
            Email: cuculovic@mdpi.com
            Skype: milos.cuculovic.mdpi

            On 08.12.2016 16:17, Ravishankar N wrote:

                On 12/08/2016 06:53 PM, Atin Mukherjee wrote:

                    On Thu, Dec 8, 2016 at 6:44 PM, Miloš Čučulović - MDPI
                    <cuculovic@mdpi.com> wrote:

                        Ah, damn! I found the issue. On the storage
                        server, the storage2 IP address was wrong: I had
                        inverted two digits in the /etc/hosts file, sorry
                        for that :(

                        I was able to add the brick now and I started the
                        heal, but still no data transfer is visible.
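
                        (Heal progress can be checked from either node
                        with something like the following; "storage" is
                        the volume name used in this thread:)

                        gluster volume heal storage info
                        gluster volume heal storage statistics heal-count
                        tail -f /var/log/glusterfs/glustershd.log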

                1. Are the files getting created on the new brick though?
                2. Can you provide the output of `getfattr -d -m . -e hex
                   /data/data-cluster` on both bricks?
                3. Is it possible to attach gdb to the self-heal daemon on
                   the original (old) brick and get a backtrace?
                       `gdb -p <pid of self-heal daemon on the original brick>`
                       run `thread apply all bt` --> share this output
                       quit gdb.
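
                (A non-interactive equivalent, as a sketch; the self-heal
                daemon pid can be read from `gluster volume status`:)

                gdb -p <shd-pid> -batch -ex 'thread apply all bt' > shd-backtrace.txt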

                -Ravi

                    @Ravi/Pranith - can you help here?

                        By doing gluster volume status, I have:

                        Status of volume: storage
                        Gluster process                        TCP Port  RDMA Port  Online  Pid
                        ------------------------------------------------------------------------------
                        Brick storage2:/data/data-cluster      49152     0          Y       23101
                        Brick storage:/data/data-cluster       49152     0          Y       30773
                        Self-heal Daemon on localhost          N/A       N/A        Y       30050
                        Self-heal Daemon on storage            N/A       N/A        Y       30792

                        Any idea?

                        On storage I have:
                        Number of Peers: 1

                        Hostname: 195.65.194.217
                        Uuid: 7c988af2-9f76-4843-8e6f-d94866d57bb0
                        State: Peer in Cluster (Connected)

                        - Kindest regards,

                        Milos Cuculovic
                        IT Manager

                        ---
                        MDPI AG
                        Postfach, CH-4020 Basel, Switzerland
                        Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
                        Tel. +41 61 683 77 35
                        Fax +41 61 302 89 18
                        Email: cuculovic@mdpi.com
                        Skype: milos.cuculovic.mdpi

                        On 08.12.2016 13:55, Atin Mukherjee wrote:

                            Can you resend the attachment as a zip? I am
                            unable to extract the content. We shouldn't
                            have a 0 info file. What does the gluster peer
                            status output say?

                            On Thu, Dec 8, 2016 at 4:51 PM, Miloš Čučulović - MDPI
                            <cuculovic@mdpi.com> wrote:

                                I hope you received my last email Atin,
                                thank you!

                                - Kindest regards,

                                Milos Cuculovic
                                IT Manager

                                ---
                                MDPI AG
                                Postfach, CH-4020 Basel, Switzerland
                                Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
                                Tel. +41 61 683 77 35
                                Fax +41 61 302 89 18
                                Email: cuculovic@mdpi.com
                                Skype: milos.cuculovic.mdpi

                                On 08.12.2016 10:28, Atin Mukherjee wrote:

                                    ---------- Forwarded message ----------
                                    From: Atin Mukherjee <amukherj@redhat.com>
                                    Date: Thu, Dec 8, 2016 at 11:56 AM
                                    Subject: Re: [Gluster-users] Replica brick not working
                                    To: Ravishankar N <ravishankar@redhat.com>
                                    Cc: Miloš Čučulović - MDPI <cuculovic@mdpi.com>,
                                    Pranith Kumar Karampuri <pkarampu@redhat.com>,
                                    gluster-users <gluster-users@gluster.org>

                                    On Thu, Dec 8, 2016 at 11:11 AM,
                                    Ravishankar N <ravishankar@redhat.com> wrote:

                                        On 12/08/2016 10:43 AM, Atin Mukherjee wrote:

                                            From the log snippet:

                                            [2016-12-07 09:15:35.677645] I [MSGID: 106482] [glusterd-brick-ops.c:442:__glusterd_handle_add_brick] 0-management: Received add brick req
                                            [2016-12-07 09:15:35.677708] I [MSGID: 106062] [glusterd-brick-ops.c:494:__glusterd_handle_add_brick] 0-management: replica-count is 2
                                            [2016-12-07 09:15:35.677735] E [MSGID: 106291] [glusterd-brick-ops.c:614:__glusterd_handle_add_brick] 0-management:

                                            The last log entry indicates that we hit this
                                            code path in gd_addbr_validate_replica_count():

                                                if (replica_count == volinfo->replica_count) {
                                                        if (!(total_bricks % volinfo->dist_leaf_count)) {
                                                                ret = 1;
                                                                goto out;
                                                        }
                                                }

                                        It seems unlikely that this snippet was hit,
                                        because we print the E [MSGID: 106291] message
                                        above only if ret == -1.
                                        gd_addbr_validate_replica_count() returns -1 and
                                        yet does not populate err_str only when
                                        volinfo->type doesn't match any of the known
                                        volume types, so volinfo->type is corrupted
                                        perhaps?

                                    You are right, I missed that ret is set to 1 here
                                    in the above snippet.

                                    @Milos - Can you please provide us the volume info
                                    file from /var/lib/glusterd/vols/<volname>/ from
                                    all three nodes to continue the analysis?
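
                                    (For the setup in this thread that would be, e.g.,
                                    the following file on each node, assuming the
                                    volume is named "storage":)

                                    cat /var/lib/glusterd/vols/storage/info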

                                        -Ravi

                                            @Pranith, Ravi - Milos was trying to convert a
                                            dist (1 x 1) volume to a replicate (1 x 2)
                                            using add-brick and hit this issue where
                                            add-brick failed. The cluster is operating with
                                            3.7.6. Could you help on what scenario this
                                            code path can be hit? One straightforward issue
                                            I see here is the missing err_str in this path.

                                    --

                                    ~ Atin (atinm)

--

~ Atin (atinm)