[Gluster-users] Suggestions welcome for expanded single array and redundancy

Mohit Anchlia mohitanchlia at gmail.com
Fri Jun 3 16:43:16 UTC 2011


All you need to do is add 2 new bricks to the existing volume. Then run
fix-layout, because existing directories will still carry the old hash
layout, and after that run rebalance, which migrates existing files onto
the new bricks they now hash to. This is only required if you want
existing directories to spread across the new bricks; the other option
is simply to create new directories, which will pick up the new layout
on their own. It depends on your requirements. A rough command sketch
follows below.
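For example, bringing two new servers into the pool and adding their bricks
to the volume could look roughly like this (a minimal sketch; the names
server4, server5, /export/brick1 and test-volume are placeholders, not taken
from this thread):

    # make the new servers part of the trusted storage pool
    gluster peer probe server4
    gluster peer probe server5

    # add both new bricks to the existing volume in one step
    # (for a replica-2 volume, bricks must be added in multiples of the replica count)
    gluster volume add-brick test-volume server4:/export/brick1 server5:/export/brick1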

http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Rebalancing_Volume_to_Fix_Layout_Changes
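Once the bricks are in place, the fix-layout and rebalance steps described on
that page come down to something like the following (again just a sketch using
the same placeholder volume name; the exact rebalance sub-commands can vary
between releases, so check the documentation for your version):

    # rewrite the directory hash layouts so they include the new bricks
    gluster volume rebalance test-volume fix-layout start

    # then migrate existing files to the bricks they now hash to
    gluster volume rebalance test-volume start

    # check progress
    gluster volume rebalance test-volume status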


On Fri, Jun 3, 2011 at 1:38 AM, Stewart Campbell
<stewartcampbell at assureprograms.com.au> wrote:
> Hi
>
> We have started evaluating GlusterFS and I am very impressed with what we have seen and tested.  I am hoping someone can point me in the right direction or offer a solution.  We have multiple servers which we wish to use for storage.  We would like to have a single storage array accessible by client machines on the network.  We would like to achieve N+2 redundancy but have the ability to keep adding servers to increase the overall available storage.  We basically want a big NFS-accessible network store with underlying redundancy spread across storage on different servers.
>
> Example
> 3 x Servers - each with Gluster configured with 1TB of storage
> Configured for replication
> Available storage 1TB
> Redundancy 2 x copies
>
>
> The above is straightforward.  What we want to achieve is:
> Add more servers (for the sake of the exercise specs will be identical)
> Keep the same level of redundancy N+2
> Increase storage
> If 2 more servers were installed (5 x 1TB) - (2 redundant) = 3TB Available
> If 4 more servers were installed (7 x 1TB) - (2 redundant) = 5TB Available
> and so on
>
> Is this possible with GlusterFS, and are there any suggestions on how to achieve it?  Please keep in mind we are expecting data growth and would like to keep adding servers to increase storage.
>
> Regards
> Stewart


