[Wlug] large file performance on linux
john at stoffel.org
Wed May 16 15:19:17 EDT 2007
>>>>> "brad" == brad noyes <maitre at ccs.neu.edu> writes:
brad> I am seeing some really slow performance regarding large files
brad> on linux. I write a lot of data points from a light sensor. The
brad> stream is about 53 MB/s and I need to keep this rate for 7
brad> minutes, that's a total of about 22 GB. I can sustain 53 MB/s
brad> pretty well until the file grows to over 1 GB or so, then things
brad> hit the wall and the writes to the filesystem can't keep up. The
brad> writes go from 20ms in duration to 500ms. I assume the
brad> filesystem/operating system is caching writes. Do you have any
brad> suggestions on how to speed up performance on these writes,
brad> filesystem options, kernel options, other strategies, etc?
You've already had a good bunch of suggestions, but I've got some
questions on your hardware.
- memory: 12GB, I know
- any RAID setup at all?
One way to get more performance would be to add another disk or two
and to stripe your data between them. This assumes you have enough
PCI bus bandwidth available as well. You don't say how you're
capturing the light sensor data, but it's obviously not over a serial
port or some other slow device. Network? So if you've got 53
Mbyte/second coming into the system, and another 53Mbytes/second
writing out to disk, then you're starting to get close to the
132Mbytes/sec bandwidth of the PCI bus.
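For the striping idea, a two-disk software RAID0 built with mdadm might look like the sketch below. The device names /dev/sdb and /dev/sdc are placeholders for whatever disks you add, and note that mdadm --create destroys whatever is on them:

```shell
# Placeholder devices -- adjust to your hardware. This wipes the disks!
mdadm --create /dev/md0 --level=0 --raid-devices=2 \
      --chunk=256 /dev/sdb /dev/sdc
# XFS is a reasonable choice for large streaming writes
mkfs -t xfs /dev/md0
mount /dev/md0 /data
```

With two spindles each half of every stripe lands on a different disk, so sustained write bandwidth roughly doubles, as long as the bus behind them isn't the bottleneck.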
Finding a motherboard with two or more PCI busses would help. Or
something with PCI-E busses. It all depends on your budget and the
data acquisition tool you're using.