<p dir="ltr">FYI </p>
<div class="gmail_quote">On 19 May 2015 20:25, "Varadharajan S" <<a href="mailto:rajanvaradhu@gmail.com">rajanvaradhu@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p dir="ltr">Hi, <br>
With replication I won't get the full space. Distribution is not like striping, right? If one brick in the volume is unavailable, can the other bricks handle the data between them? Is there any tuning I can do to solve this? <br>
</p>
<div class="gmail_quote">On 19 May 2015 20:02, "Atin Mukherjee" <<a href="mailto:atin.mukherjee83@gmail.com" target="_blank">atin.mukherjee83@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p dir="ltr"><br>
On 19 May 2015 17:10, "Varadharajan S" <<a href="mailto:rajanvaradhu@gmail.com" target="_blank">rajanvaradhu@gmail.com</a>> wrote:<br>
><br>
> Hi,<br>
><br>
> We are using Ubuntu 14.04 Server, and for storage we configured Gluster 3.5 as a distributed volume. Details below:<br>
><br>
> 1). 4 servers running Ubuntu 14.04 Server; each server's free disk space is configured as a ZFS raidz2 pool<br>
><br>
> 2). Each server has a /pool/gluster ZFS volume, with capacities of 5 TB, 8 TB, 6 TB and 10 TB<br>
><br>
> 3). The bricks are rep1, rep2, rep3 and st1; all bricks are combined into a distributed volume and mounted on each system as follows,<br>
> <br>
> For example, on rep1 -> mount -t glusterfs rep1:/glustervol /data<br>
> rep2 -> mount -t glusterfs rep2:/glustervol /data<br>
> rep3 -> mount -t glusterfs rep3:/glustervol /data<br>
> st1 -> mount -t glusterfs st1:/glustervol /data<br>
><br>
> So /data has around 29 TB in total, and all our application data is stored under the /data mount point.<br>
><br>
> Details about volume:<br>
><br>
> volume glustervol-client-0<br>
> type protocol/client<br>
> option send-gids true<br>
> option password b217da9d1d8b-bb55<br>
> option username 9d76-4553-8c75<br>
> option transport-type tcp<br>
> option remote-subvolume /pool/gluster<br>
> option remote-host rep1<br>
> option ping-timeout 42<br>
> end-volume<br>
><br>
> volume glustervol-client-1<br>
> type protocol/client<br>
> option send-gids true<br>
> option password b217da9d1d8b-bb55<br>
> option username jkd76-4553-5347<br>
> option transport-type tcp<br>
> option remote-subvolume /pool/gluster<br>
> option remote-host rep2<br>
> option ping-timeout 42<br>
> end-volume<br>
><br>
> volume glustervol-client-2<br>
> type protocol/client<br>
> option send-gids true<br>
> option password b217da9d1d8b-bb55<br>
> option username 19d7-5a190c2<br>
> option transport-type tcp<br>
> option remote-subvolume /pool/gluster<br>
> option remote-host rep3<br>
> option ping-timeout 42<br>
> end-volume<br>
><br>
> volume glustervol-client-3<br>
> type protocol/client<br>
> option send-gids true<br>
> option password b217da9d1d8b-bb55<br>
> option username c75-5436b5a168347<br>
> option transport-type tcp<br>
> option remote-subvolume /pool/gluster<br>
> option remote-host st1<br>
><br>
> option ping-timeout 42<br>
> end-volume<br>
><br>
> volume glustervol-dht<br>
> type cluster/distribute<br>
> subvolumes glustervol-client-0 glustervol-client-1 glustervol-client-2 glustervol-client-3<br>
> end-volume<br>
><br>
> volume glustervol-write-behind<br>
> type performance/write-behind<br>
> subvolumes glustervol-dht<br>
> end-volume<br>
><br>
> volume glustervol-read-ahead<br>
> type performance/read-ahead<br>
> subvolumes glustervol-write-behind<br>
> end-volume<br>
><br>
> volume glustervol-io-cache<br>
> type performance/io-cache<br>
> subvolumes glustervol-read-ahead<br>
> end-volume<br>
><br>
> volume glustervol-quick-read<br>
> type performance/quick-read<br>
> subvolumes glustervol-io-cache<br>
> end-volume<br>
><br>
> volume glustervol-open-behind<br>
> type performance/open-behind<br>
> subvolumes glustervol-quick-read<br>
> end-volume<br>
><br>
> volume glustervol-md-cache<br>
> type performance/md-cache<br>
> subvolumes glustervol-open-behind<br>
> end-volume<br>
><br>
> volume glustervol<br>
> type debug/io-stats<br>
> option count-fop-hits off<br>
> option latency-measurement off<br>
> subvolumes glustervol-md-cache<br>
> end-volume<br>
><br>
><br>
> ap@rep3:~$ sudo gluster volume info<br>
> <br>
> Volume Name: glustervol<br>
> Type: Distribute<br>
> Volume ID: 165b-XXXXX<br>
> Status: Started<br>
> Number of Bricks: 4<br>
> Transport-type: tcp<br>
> Bricks:<br>
> Brick1: rep1:/pool/gluster<br>
> Brick2: rep2:/pool/gluster<br>
> Brick3: rep3:/pool/gluster<br>
> Brick4: st1:/pool/gluster<br>
><br>
> Problem:<br>
><br>
> If we shut down any of the bricks, the volume size is reduced (this is OK), but from the other servers the /data mount point only lists contents; I can't write to or edit any files/folders. <br>
><br>
> Solution Required:<br>
><br>
> If any one brick is unavailable, the other servers should still allow write and edit operations.<br>
This is expected since you are using a distributed volume. You wouldn't be able to write/edit files belonging to the brick that is down. The solution would be to migrate to a distributed-replicate volume.<br>
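A minimal sketch of that migration, assuming one new peer/brick is available to mirror each existing brick (the rep1b/rep2b/rep3b/st1b hostnames are placeholders, not part of this setup). Passing "replica 2" to add-brick converts the existing 4-brick distribute volume into a 4x2 distributed-replicate volume:<br>
<br>
# Hypothetical brick names; each new brick mirrors one existing brick<br>
gluster volume add-brick glustervol replica 2 \<br>
&nbsp;&nbsp;&nbsp;&nbsp;rep1b:/pool/gluster rep2b:/pool/gluster \<br>
&nbsp;&nbsp;&nbsp;&nbsp;rep3b:/pool/gluster st1b:/pool/gluster<br>
<br>
# Then trigger a full self-heal so the new replica bricks are populated<br>
gluster volume heal glustervol full<br>
<br>
Note this doubles the raw capacity needed: with replica 2, the ~29 TB of raw space would yield roughly half that of usable space, which is the trade-off for surviving a brick going down.<br>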
><br>
> Please let us know what I can try further.<br>
><br>
> Regards,<br>
> Varad<br>
><br>
><br>
> _______________________________________________<br>
> Gluster-users mailing list<br>
> <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
> <a href="http://www.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://www.gluster.org/mailman/listinfo/gluster-users</a><br>
</p>
</blockquote></div>
</blockquote></div>