[Wlug] large file performance on linux

brad noyes maitre at ccs.neu.edu
Wed May 16 10:53:16 EDT 2007


Hello All,

I am seeing some really slow performance with large files on Linux. I
write a lot of data points from a light sensor. The stream is about 53 MB/s, and
I need to sustain that rate for 7 minutes, a total of about 22 GB. I
can sustain 53 MB/s pretty well until the file grows past 1 GB or so; then
things hit the wall and the writes to the filesystem can't keep up. Individual
writes go from 20 ms in duration to 500 ms. I assume the filesystem/operating
system is caching the writes. Do you have any suggestions on how to speed up
these writes: filesystem options, kernel options, other strategies, etc.?
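For concreteness, a minimal sketch of the kind of per-write timing behind those numbers, assuming plain write() calls to a single growing file (the chunk size, chunk count, and path here are made up for illustration, not the actual acquisition code):

```python
import os
import time

def timed_writes(path, chunk_size=4 * 1024 * 1024, total_chunks=16):
    """Write fixed-size chunks and record the duration of each write()."""
    durations = []
    chunk = b"\0" * chunk_size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        for _ in range(total_chunks):
            start = time.monotonic()
            os.write(fd, chunk)  # lands in the page cache; flushed later
            durations.append(time.monotonic() - start)
    finally:
        os.close(fd)
    return durations

durations = timed_writes("/tmp/write_timing_test.dat")
print("slowest write: %.1f ms" % (max(durations) * 1000))
```

Once the page cache fills and writeback kicks in, the slowest writes in a trace like this jump well above the typical ones, which matches the 20 ms to 500 ms pattern described.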

Things I have tried:
 - I have tried this on an ext3 filesystem as well as an xfs filesystem,
   with the same result.

 - I have also tried spooling across several files (a la multiple volumes),
   but I see no difference in performance. In fact, I think this actually
   hinders performance a bit.

 - I keep my own giant memory buffer where all the data is stored, and
   it is then written to disk by a background thread. This helps, but
   I run out of space in the buffer before I finish taking data.
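The buffer-plus-background-thread approach in the last item looks roughly like the toy sketch below (a hypothetical producer/consumer version with a bounded queue standing in for the giant memory buffer; the names and sizes are made up):

```python
import queue
import threading

BUFFER_CHUNKS = 64            # capacity of the in-memory buffer (made up)

buf = queue.Queue(maxsize=BUFFER_CHUNKS)
written = []                  # stands in for the output file

def writer():
    """Background thread: drain the buffer and 'write' each chunk out."""
    while True:
        chunk = buf.get()
        if chunk is None:     # sentinel: no more data coming
            break
        written.append(chunk) # real code would os.write() to disk here

t = threading.Thread(target=writer)
t.start()

# Producer: the acquisition loop. If the disk can't keep up, put()
# blocks once the queue is full -- the equivalent of the buffer
# running out of space before the run finishes.
for i in range(256):
    buf.put(b"sample-%d" % i)

buf.put(None)                 # tell the writer to finish
t.join()
print("chunks written:", len(written))
```

The failure mode described above corresponds to the producer filling the queue faster than the writer thread can drain it, so the buffer only postpones the stall rather than removing it.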

Thanks,
   -- Brad


