[Gluster-users] Does Gluster support fast random reads?

Raghavendra G raghavendra at gluster.com
Fri Dec 11 18:39:08 UTC 2009


Hi Stefan,

Since you are reading 10-100 mostly consecutive blocks from the middle
of each file, the read-ahead translator will help. Also, for smaller
files (sizes less than 1000 KB) we have a quick-read translator, which
can really boost performance.
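
For reference, the two translators are typically loaded on the client
side along these lines (a sketch only: the subvolume name "client" is a
placeholder, and the option names and defaults can differ between
releases, so check the docs for your version):

    # client-side volfile fragment; values are illustrative, not tuned
    volume readahead
      type performance/read-ahead
      option page-count 4          # pages prefetched ahead per file
      subvolumes client            # assumes a protocol/client volume named "client"
    end-volume

    volume quickread
      type performance/quick-read
      option max-file-size 64KB    # files up to this size are fetched with the lookup
      option cache-timeout 1
      subvolumes readahead
    end-volume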

regards,
----- Original Message -----
From: "Stefan Thomas" <thomas at eload24.com>
To: gluster-users at gluster.org
Sent: Friday, December 11, 2009 5:55:56 PM GMT +04:00 Abu Dhabi / Muscat
Subject: [Gluster-users] Does Gluster support fast random reads?

Hey list,

My application stores files of up to about 50 MB, but it only ever
reads individual 8 KB blocks. I'm wondering whether GlusterFS is a
good fit for my situation: I'm looking for a solution that lets me
fetch these individual blocks with relatively low latency. A typical
request requires me to fetch the first two or three blocks, the last
block, and 10-100 mostly consecutive blocks from the middle.
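
To make the access pattern concrete, here is a rough sketch of what one
request would look like as plain positional reads against a file on a
GlusterFS FUSE mount (the path and block indices are made-up examples):

    import os

    BLOCK = 8192  # 8 KB block size

    def read_block(fd, index):
        """Read one 8 KB block by absolute block index (positional read)."""
        return os.pread(fd, BLOCK, index * BLOCK)

    # hypothetical file on the mount
    fd = os.open("/mnt/gluster/files/abc.dat", os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        head   = [read_block(fd, i) for i in range(3)]             # first three blocks
        tail   = read_block(fd, (size - 1) // BLOCK)               # last block
        middle = [read_block(fd, i) for i in range(2000, 2040)]    # consecutive blocks from the middle
    finally:
        os.close(fd)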



We're currently using several Apache servers and HTTP Range requests
to deliver the data, and a typical request completes in about 30 ms on
average. Now I'm looking for a solution that provides similar
performance (anything under 100 ms would be acceptable) while also
offering replication and better scalability.
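
For completeness, fetching a single 8 KB block in the current setup is
just a ranged GET, roughly like this (hypothetical URL and offsets):

    import urllib.request

    # bytes 16384-24575 = block index 2 of an 8 KB-blocked file
    req = urllib.request.Request(
        "http://media.example.com/files/abc.dat",
        headers={"Range": "bytes=16384-24575"},
    )
    with urllib.request.urlopen(req) as resp:
        block = resp.read()   # 8192 bytes, served as 206 Partial Content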

GlusterFS seems to be a perfect fit, but is it optimized for random 
access reads? Should I store the 8KB blocks as individual files instead?

Your opinions are very much appreciated!


Cheers,

Stefan Thomas
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


