<div dir="ltr"><div><div><div><div>Sorry, I didn&#39;t check this: after rebooting my second node I only looked at &quot;<span lang="en"><span>gluster volume info</span></span>&quot; and saw &quot;Status: Started&quot;.<br></div>Now I&#39;ve tried your recommendation and you are right!<br>&quot;gluster volume start &lt;volname&gt; force&quot; didn&#39;t change the output of &quot;<span lang="en"><span>gluster volume info</span></span>&quot;, but I was able to mount my share!<br></div>Thank you very much for your advice!<br><br></div></div>But why did &quot;<span lang="en"><span>gluster volume info</span></span>&quot; show my volume as &quot;Started&quot; even before &quot;gluster volume start &lt;volname&gt; force&quot;?<br><br><br></div><div class="gmail_extra"><br><div class="gmail_quote">2015-06-18 14:18 GMT+03:00 Ravishankar N <span dir="ltr">&lt;<a href="mailto:ravishankar@redhat.com" target="_blank">ravishankar@redhat.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div text="#000000" bgcolor="#FFFFFF"><div><div class="h5">
    <br>
    <br>
    <div>On 06/18/2015 04:25 PM, Игорь Бирюлин
      wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr">
        <div>
          <div>
            <div>
              <div>
                <div>
                  <div>Thank you for your answer!<br>
                    <br>
                  </div>
                  I checked your recommendation:<br>
                </div>
                <div>1. On the first node I blocked all connections from the
                  second node with iptables. I checked that on both nodes &quot;gluster
                  peer status&quot; returned &quot;Disconnected&quot;, and that on
                  both nodes the share was still mounted and working like a
                  local file system.<br>
                </div>
                2. Rebooted the second node (<span lang="en"><span>remember, the first node was
                    still blocked by iptables). The second node booted without
                    problems and the glusterfs processes started:<br>
                    # ps aux | grep [g]luster<br>
                    root      4145  0.0  0.0 375692 16076 ?        Ssl 
                    13:35   0:00 /usr/sbin/glusterd -p
                    /var/run/glusterd.pid<br>
                  </span></span></div>
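For reference, blocking the peer this way can be sketched with iptables. The node names xxx1/xxx2 come from the thread; the port numbers are assumptions based on common GlusterFS defaults (24007 for glusterd management, 49152 and up for brick processes in recent releases), not something stated in the thread:

```shell
# Hypothetical sketch, run on the first node (xxx1): drop all gluster
# traffic coming from the second node (xxx2). Ports are assumed defaults.
iptables -A INPUT -s xxx2 -p tcp --dport 24007 -j DROP        # glusterd management
iptables -A INPUT -s xxx2 -p tcp --dport 49152:49251 -j DROP  # brick port range

# Afterwards both nodes should report the peer as Disconnected:
gluster peer status
```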
              <br>
              &quot;gluster peer status&quot; returned &quot;Disconnected&quot;<span lang="en"><span> and
                  the volume showed as started on localhost:<br>
                  # gluster volume info<br>
                   Volume Name: files<br>
                  Type: Replicate<br>
                  Volume ID: 41067184-d57a-4132-a997-dbd47c974b40<br>
                  Status: Started<br>
                  Number of Bricks: 1 x 2 = 2<br>
                  Transport-type: tcp<br>
                  Bricks:<br>
                  Brick1: xxx1:/storage/gluster_brick_repofiles<br>
                  Brick2: xxx2:/storage/gluster_brick_repofiles<br>
                  <br>
                </span></span></div>
            <span lang="en"><span>But
                I can&#39;t mount this volume:<br>
                # cat /etc/fstab |grep gluster<br>
                127.0.0.1:/files                           
                /repo           glusterfs       rw,_netdev             
                0 0<br>
                # mount /repo<br>
                Mount failed. Please check the log file for more
                details.<br>
                <br>
              </span></span></div>
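When a GlusterFS mount fails like this, the client log usually names the cause; a minimal troubleshooting sketch, assuming default glusterfs logging (the client log file is named after the mount point) — the mount point /repo and volume name files are taken from the thread:

```shell
# Attempt the mount directly, then inspect the client-side log.
# Assumption: default log location /var/log/glusterfs/<mountpoint>.log.
mount -t glusterfs 127.0.0.1:/files /repo
tail -n 20 /var/log/glusterfs/repo.log
```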
          <span lang="en"><span>I sent
              part of the log in my first message.<br>
              <br>
            </span></span></div>
        <span lang="en"><span>If I
            open the first node in iptables again, I can mount without
            problems, but what should I do when I have lost one node and
            there is a chance the other node will be rebooted?<br>
          </span></span>
        <div>
          <div>
            <div>
              <div>
                <div>
                  <div><span lang="en"><span><br>
                        <br>
                      </span></span></div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </blockquote></div></div>
    `gluster volume start &lt;volname&gt; force` doesn&#39;t work?<span class=""><br>
    <br>
    <blockquote type="cite">
      <div dir="ltr">
        <div>
          <div>
            <div>
              <div>
                <div>
                  <div><span lang="en"><span></span></span></div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">2015-06-17 18:46 GMT+03:00 Ravishankar
          N <span dir="ltr">&lt;<a href="mailto:ravishankar@redhat.com" target="_blank">ravishankar@redhat.com</a>&gt;</span>:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span><br>
              <br>
              On 06/17/2015 07:04 PM, Игорь Бирюлин wrote:<br>
              <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                If we turn off one server, the other keeps working and
                the mounted volume can be used without problems.<br>
                But if we reboot the other server while the first is
                turned off (or gluster is stopped on it), our
                volume can&#39;t be mounted (glusterd is started).<br>
              </blockquote>
            </span>
            If both nodes are down and you bring up only one node,
            glusterd will not start the volume (i.e. the brick, nfs and
            glustershd processes) automatically. It waits for the other
            node&#39;s glusterd to come up as well, so that the two stay
            in sync. You can override this behavior with a `gluster
            volume start &lt;volname&gt; force`, which brings up the
            gluster processes on this node alone; you can then mount the
            volume.<br>
            <br>
            -Ravi<br>
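The explanation above also answers the later confusion in this thread: &quot;Status: Started&quot; in `gluster volume info` reflects the volume's stored configuration state, not whether the brick processes are actually running right now (that is what `gluster volume status` shows). A small sketch of pulling the stored state out of the info output, using sample lines from this thread's volume &quot;files&quot;:

```shell
# Extract the stored "Status:" field from `gluster volume info` output.
# Note: this is the configured state; live processes are a separate check.
volume_status() {
  awk -F': ' '/^Status:/ { print $2 }'
}

# Sample `gluster volume info` output as posted in the thread:
sample='Volume Name: files
Type: Replicate
Status: Started
Number of Bricks: 1 x 2 = 2'

printf '%s\n' "$sample" | volume_status   # prints: Started

# To see whether brick/nfs/self-heal processes are really up, run
# `gluster volume status files`; on a lone node after reboot,
# `gluster volume start files force` starts them, as described above.
```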
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
    <br>
  </span></div>

</blockquote></div><br></div>