<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <br>
    <br>
    <div class="moz-cite-prefix">On 03/31/2015 05:12 AM, Lilley, John F.
      wrote:<br>
    </div>
    <blockquote
      cite="mid:CDF0DE10-8DD3-41AD-99E8-8929417F557C@caltech.edu"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=windows-1252">
      <div>Hi,</div>
      <div><br>
      </div>
      <div>I'd like to shrink my AWS/EBS-based *distribute-only* Gluster
        file system by migrating the data to other already existing,
        active, and partially utilized bricks, but I found that the
        'replace-brick start' mentioned in the documentation is now
        deprecated. I see that there has been some back and forth on
        the mailing list about migrating data using self-heal on a
        replicated system, but not so much on a distribute-only file
        system. Can anyone tell me the blessed way of doing this in
        3.6.2? Is there one?</div>
      <div><br>
      </div>
      <div>To be clear, all of the EBS-based bricks are partially
        utilized at this point, so I'd need a method to migrate the
        data first.</div>
      <div><br>
      </div>
    </blockquote>
    <br>
    If I understand you correctly, you want to shrink a distribute
    volume by removing bricks and migrating their data to the
    remaining, partially utilized ones. You can do this with the
    remove-brick start/status/commit sequence: 'start' triggers a
    rebalance that moves the data off the bricks being removed, and
    'commit' (run once the status shows 'completed') drops them from
    the volume. If you also need new capacity, add it first with
    add-brick. Something like this:<br>
    ------------------------------------------------<br>
    <tt>[root@tuxpad ~]# gluster volume info testvol</tt><tt><br>
    </tt><tt> </tt><tt><br>
    </tt><tt>Volume Name: testvol</tt><tt><br>
    </tt><tt>Type: Distribute</tt><tt><br>
    </tt><tt>Volume ID: a89aa154-885c-4e14-8d3a-b555733b11f1</tt><tt><br>
    </tt><tt>Status: Started</tt><tt><br>
    </tt><tt>Number of Bricks: 3</tt><tt><br>
    </tt><tt>Transport-type: tcp</tt><tt><br>
    </tt><tt>Bricks:</tt><tt><br>
    </tt><tt>Brick1: 127.0.0.2:/home/ravi/bricks/brick1</tt><tt><br>
    </tt><tt>Brick2: 127.0.0.2:/home/ravi/bricks/brick2</tt><tt><br>
    </tt><tt>Brick3: 127.0.0.2:/home/ravi/bricks/brick3</tt><tt><br>
    </tt><tt>[root@tuxpad ~]# </tt><tt><br>
    </tt><tt>[root@tuxpad ~]# </tt><tt><br>
    </tt><tt>[root@tuxpad ~]# gluster volume add-brick testvol
      127.0.0.2:/home/ravi/bricks/brick{4..6}</tt><tt><br>
    </tt><tt>volume add-brick: success</tt><tt><br>
    </tt><tt>[root@tuxpad ~]# </tt><tt><br>
    </tt><tt>[root@tuxpad ~]# gluster volume info testvol</tt><tt><br>
    </tt><tt> </tt><tt><br>
    </tt><tt>Volume Name: testvol</tt><tt><br>
    </tt><tt>Type: Distribute</tt><tt><br>
    </tt><tt>Volume ID: a89aa154-885c-4e14-8d3a-b555733b11f1</tt><tt><br>
    </tt><tt>Status: Started</tt><tt><br>
    </tt><tt>Number of Bricks: 6</tt><tt><br>
    </tt><tt>Transport-type: tcp</tt><tt><br>
    </tt><tt>Bricks:</tt><tt><br>
    </tt><tt>Brick1: 127.0.0.2:/home/ravi/bricks/brick1</tt><tt><br>
    </tt><tt>Brick2: 127.0.0.2:/home/ravi/bricks/brick2</tt><tt><br>
    </tt><tt>Brick3: 127.0.0.2:/home/ravi/bricks/brick3</tt><tt><br>
    </tt><tt>Brick4: 127.0.0.2:/home/ravi/bricks/brick4</tt><tt><br>
    </tt><tt>Brick5: 127.0.0.2:/home/ravi/bricks/brick5</tt><tt><br>
    </tt><tt>Brick6: 127.0.0.2:/home/ravi/bricks/brick6</tt><tt><br>
    </tt><tt>[root@tuxpad ~]# </tt><tt><br>
    </tt><tt>[root@tuxpad ~]# </tt><tt><br>
    </tt><tt>[root@tuxpad ~]# </tt><tt><br>
    </tt><tt>[root@tuxpad ~]# gluster v remove-brick testvol
      127.0.0.2:/home/ravi/bricks/brick{1..3} start</tt><tt><br>
    </tt><tt>volume remove-brick start: success</tt><tt><br>
    </tt><tt>ID: d535675e-8362-4a44-a291-1e567a77531e</tt><tt><br>
    </tt><tt>[root@tuxpad ~]# gluster v remove-brick testvol
      127.0.0.2:/home/ravi/bricks/brick{1..3} status</tt><tt><br>
    </tt><pre>                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost               10        0Bytes            20             0             0           <b> completed </b>              0.00
</pre><tt><br>
    </tt><tt>[root@tuxpad ~]# </tt><tt><br>
    </tt><tt>[root@tuxpad ~]# gluster v remove-brick testvol
      127.0.0.2:/home/ravi/bricks/brick{1..3} commit</tt><tt><br>
    </tt><tt>Removing brick(s) can result in data loss. Do you want to
      Continue? (y/n) y</tt><tt><br>
    </tt><tt>volume remove-brick commit: success</tt><tt><br>
    </tt><b><tt>Check the removed bricks to ensure all files are
        migrated.</tt></b><b><tt><br>
      </tt></b><b><tt>If files with data are found on the brick path,
        copy them via a gluster mount point before re-purposing the
        removed brick. </tt></b><tt><br>
    </tt><tt>[root@tuxpad ~]# </tt><tt><br>
    </tt><tt>[root@tuxpad ~]# gluster volume info testvol</tt><tt><br>
    </tt><tt> </tt><tt><br>
    </tt><tt>Volume Name: testvol</tt><tt><br>
    </tt><tt>Type: Distribute</tt><tt><br>
    </tt><tt>Volume ID: a89aa154-885c-4e14-8d3a-b555733b11f1</tt><tt><br>
    </tt><tt>Status: Started</tt><tt><br>
    </tt><tt>Number of Bricks: 3</tt><tt><br>
    </tt><tt>Transport-type: tcp</tt><tt><br>
    </tt><tt>Bricks:</tt><tt><br>
    </tt><tt>Brick1: 127.0.0.2:/home/ravi/bricks/brick4</tt><tt><br>
    </tt><tt>Brick2: 127.0.0.2:/home/ravi/bricks/brick5</tt><tt><br>
    </tt><tt>Brick3: 127.0.0.2:/home/ravi/bricks/brick6</tt><tt><br>
    </tt><tt>[root@tuxpad ~]# </tt><tt><br>
    </tt>------------------------------------------------<br>
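    A note on that last warning: after the commit, it is worth
    confirming that nothing was left behind on the removed brick
    directories. One quick check (a sketch; the brick paths are from
    the example above, so substitute your own) is to list any regular
    files outside the brick's internal .glusterfs metadata directory:<br>
    <tt>[root@tuxpad ~]# find /home/ravi/bricks/brick{1..3} -path
      '*/.glusterfs' -prune -o -type f -print</tt><br>
    If this prints nothing, the rebalance migrated everything; any
    files it does show should be copied back in via a Gluster mount
    point before the bricks are re-purposed, as the commit output
    says.<br>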
    Hope this helps.<br>
    Ravi<br>
    <br>
    <blockquote
      cite="mid:CDF0DE10-8DD3-41AD-99E8-8929417F557C@caltech.edu"
      type="cite">
      <div>Thank You,</div>
      <div>John</div>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://www.gluster.org/mailman/listinfo/gluster-users">http://www.gluster.org/mailman/listinfo/gluster-users</a></pre>
    </blockquote>
    <br>
  </body>
</html>