Hi Vijay,

Please find the volume info here:

[root@cpu01 ~]# gluster volume info

Volume Name: ds01
Type: Distributed-Replicate
Volume ID: 369d3fdc-c8eb-46b7-a33e-0a49f2451ff6
Status: Started
Number of Bricks: 48 x 2 = 96
Transport-type: tcp
Bricks:
Brick1: cpu01:/bricks/1/vol1
Brick2: cpu02:/bricks/1/vol1
Brick3: cpu03:/bricks/1/vol1
Brick4: cpu04:/bricks/1/vol1
Brick5: cpu01:/bricks/2/vol1
Brick6: cpu02:/bricks/2/vol1
Brick7: cpu03:/bricks/2/vol1
Brick8: cpu04:/bricks/2/vol1
Brick9: cpu01:/bricks/3/vol1
Brick10: cpu02:/bricks/3/vol1
Brick11: cpu03:/bricks/3/vol1
Brick12: cpu04:/bricks/3/vol1
Brick13: cpu01:/bricks/4/vol1
Brick14: cpu02:/bricks/4/vol1
Brick15: cpu03:/bricks/4/vol1
Brick16: cpu04:/bricks/4/vol1
Brick17: cpu01:/bricks/5/vol1
Brick18: cpu02:/bricks/5/vol1
Brick19: cpu03:/bricks/5/vol1
Brick20: cpu04:/bricks/5/vol1
Brick21: cpu01:/bricks/6/vol1
Brick22: cpu02:/bricks/6/vol1
Brick23: cpu03:/bricks/6/vol1
Brick24: cpu04:/bricks/6/vol1
Brick25: cpu01:/bricks/7/vol1
Brick26: cpu02:/bricks/7/vol1
Brick27: cpu03:/bricks/7/vol1
Brick28: cpu04:/bricks/7/vol1
Brick29: cpu01:/bricks/8/vol1
Brick30: cpu02:/bricks/8/vol1
Brick31: cpu03:/bricks/8/vol1
Brick32: cpu04:/bricks/8/vol1
Brick33: cpu01:/bricks/9/vol1
Brick34: cpu02:/bricks/9/vol1
Brick35: cpu03:/bricks/9/vol1
Brick36: cpu04:/bricks/9/vol1
Brick37: cpu01:/bricks/10/vol1
Brick38: cpu02:/bricks/10/vol1
Brick39: cpu03:/bricks/10/vol1
Brick40: cpu04:/bricks/10/vol1
Brick41: cpu01:/bricks/11/vol1
Brick42: cpu02:/bricks/11/vol1
Brick43: cpu03:/bricks/11/vol1
Brick44: cpu04:/bricks/11/vol1
Brick45: cpu01:/bricks/12/vol1
Brick46: cpu02:/bricks/12/vol1
Brick47: cpu03:/bricks/12/vol1
Brick48: cpu04:/bricks/12/vol1
Brick49: cpu01:/bricks/13/vol1
Brick50: cpu02:/bricks/13/vol1
Brick51: cpu03:/bricks/13/vol1
Brick52: cpu04:/bricks/13/vol1
Brick53: cpu01:/bricks/14/vol1
Brick54: cpu02:/bricks/14/vol1
Brick55: cpu03:/bricks/14/vol1
Brick56: cpu04:/bricks/14/vol1
Brick57: cpu01:/bricks/15/vol1
Brick58: cpu02:/bricks/15/vol1
Brick59: cpu03:/bricks/15/vol1
Brick60: cpu04:/bricks/15/vol1
Brick61: cpu01:/bricks/16/vol1
Brick62: cpu02:/bricks/16/vol1
Brick63: cpu03:/bricks/16/vol1
Brick64: cpu04:/bricks/16/vol1
Brick65: cpu01:/bricks/17/vol1
Brick66: cpu02:/bricks/17/vol1
Brick67: cpu03:/bricks/17/vol1
Brick68: cpu04:/bricks/17/vol1
Brick69: cpu01:/bricks/18/vol1
Brick70: cpu02:/bricks/18/vol1
Brick71: cpu03:/bricks/18/vol1
Brick72: cpu04:/bricks/18/vol1
Brick73: cpu01:/bricks/19/vol1
Brick74: cpu02:/bricks/19/vol1
Brick75: cpu03:/bricks/19/vol1
Brick76: cpu04:/bricks/19/vol1
Brick77: cpu01:/bricks/20/vol1
Brick78: cpu02:/bricks/20/vol1
Brick79: cpu03:/bricks/20/vol1
Brick80: cpu04:/bricks/20/vol1
Brick81: cpu01:/bricks/21/vol1
Brick82: cpu02:/bricks/21/vol1
Brick83: cpu03:/bricks/21/vol1
Brick84: cpu04:/bricks/21/vol1
Brick85: cpu01:/bricks/22/vol1
Brick86: cpu02:/bricks/22/vol1
Brick87: cpu03:/bricks/22/vol1
Brick88: cpu04:/bricks/22/vol1
Brick89: cpu01:/bricks/23/vol1
Brick90: cpu02:/bricks/23/vol1
Brick91: cpu03:/bricks/23/vol1
Brick92: cpu04:/bricks/23/vol1
Brick93: cpu01:/bricks/24/vol1
Brick94: cpu02:/bricks/24/vol1
Brick95: cpu03:/bricks/24/vol1
Brick96: cpu04:/bricks/24/vol1
Options Reconfigured:
nfs.disable: off
user.cifs: enable
auth.allow: 10.10.0.*
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on
[root@cpu01 ~]#

Thanks,
punit

On Tue, Feb 17, 2015 at 6:16 AM, Ben Turner <bturner@redhat.com> wrote:

----- Original Message -----
> From: "Joe Julian" <joe@julianfamily.org>
> To: "Punit Dambiwal" <hypunit@gmail.com>, gluster-users@gluster.org, "Humble Devassy Chirammal"
> <humble.devassy@gmail.com>
> Sent: Monday, February 16, 2015 3:32:31 PM
> Subject: Re: [Gluster-users] Gluster performance on the small files
>
> On 02/12/2015 10:58 PM, Punit Dambiwal wrote:
>
> Hi,
>
> I have seen that Gluster performance is dead slow on small files, even though I
> am using SSDs... the performance is too bad... I am even getting better
> performance on my SAN with normal SATA disks...
>
> I am using distributed-replicated GlusterFS with replica count=2... I have all
> SSD disks on the bricks...
>
> root@vm3:~# dd bs=64k count=4k if=/dev/zero of=test oflag=dsync
> 4096+0 records in
> 4096+0 records out
> 268435456 bytes (268 MB) copied, 57.3145 s, 4.7 MB/s

This seems pretty slow, even if you are using gigabit. Here is what I get:

[root@gqac031 smallfile]# dd bs=64k count=4k if=/dev/zero of=/gluster-emptyvol/test oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 10.5965 s, 25.3 MB/s

FYI, this is on my 2-node pure replica with spinning disks (RAID 6, which is not set up for smallfile workloads; for smallfile I normally use RAID 10) and a 10G network.

The single-threaded dd process is definitely a bottleneck here; the power of distributed systems is in doing things in parallel across clients/threads. You may want to try smallfile:

http://www.gluster.org/community/documentation/index.php/Performance_Testing

Smallfile command used: python /small-files/smallfile/smallfile_cli.py --operation create --threads 8 --file-size 64 --files 10000 --top /gluster-emptyvol/ --pause 1000 --host-set "client1, client2"

total threads = 16
total files = 157100
total data =     9.589 GB
 98.19% of requested files processed, minimum is  70.00
41.271602 sec elapsed time
3806.491454 files/sec
3806.491454 IOPS
237.905716 MB/sec
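
As a quick sanity check on those numbers, the MB/sec figure is simply the total data divided by the elapsed time, which you can verify with a bc one-liner:

# 9.589 GB x 1024 MB/GB of data written, spread over 41.271602 seconds of elapsed time
echo "scale=1; 9.589 * 1024 / 41.271602" | bc    # prints ~237.9, matching the MB/sec above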

If you wanted to do something similar with dd you could do:

<my script>
for i in $(seq 1 4)
do
    # kick off 4 dd writers in parallel, each to its own file
    dd bs=64k count=4k if=/dev/zero of=/gluster-emptyvol/test$i oflag=dsync &
done
# wait for every background dd to finish (assumes no other dd is running)
for pid in $(pidof dd); do
    while kill -0 "$pid" 2>/dev/null; do
        sleep 0.1
    done
done

# time myscript.sh

Then do the math to figure out the MB/sec of the system.
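
As a rough sketch of that math, assuming the 4-writer script above: each dd writes 4096 x 64 KiB = 256 MiB, so the run moves 1024 MiB in total, and the aggregate rate is that total divided by the wall-clock ("real") seconds reported by time, e.g.:

# ELAPSED is the "real" seconds from `time myscript.sh`; 20.5 is only a placeholder value
ELAPSED=20.5
echo "scale=1; 4 * 256 / $ELAPSED" | bc    # aggregate MiB/sec across the 4 writers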

-b

>
> root@vm3:~# dd bs=64k count=4k if=/dev/zero of=test conv=fdatasync
> 4096+0 records in
> 4096+0 records out
> 268435456 bytes (268 MB) copied, 1.80093 s, 149 MB/s
>
> How small is your VM image? The image is the file that GlusterFS is serving,
> not the small files within it. Perhaps the filesystem you're using within
> your VM is inefficient with regard to how it handles disk writes.
>
> I believe your concept of "small file" performance is misunderstood, as is
> often the case with this phrase. The "small file" issue has to do with the
> overhead of finding and checking the validity of any file, but with a small
> file the percentage of time doing those checks is proportionally greater.
> With your VM image, that file is already open. There are no self-heal checks
> or lookups that are happening in your tests, so that overhead is not the
> problem.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users