On 23/09/10 15:26, Andrés Yacopino wrote:
> I think I am getting worse performance because of random disk access
> speed, is that true?
>
Yes. If you run your tar process under the time command, you will find
it is similarly slow.
Actually it's not so much random disk access speed as the fixed time
involved in stat() and open() on each file, no matter what its size is.
Things are compounded when there are a lot of files in one directory.
My experiments with GFS2 show that past 4000 files/directory the
performance of ls and open operations deteriorates rapidly. Other
filesystems have different thresholds but they _ALL_ perform poorly when
a directory has "too many" files in it.
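For what it's worth, here's a quick sketch (mine, not anything from
Bacula) that shows the per-file lookup cost climbing as a directory
fills. The exact knee is filesystem-dependent; the 4000 figure above
was what I saw on GFS2. On a warm cache this mostly shows
directory-lookup cost, so drop caches between runs to see the on-disk
behaviour:

#!/usr/bin/env python3
# Time os.stat() per file as a directory fills up.
import os
import tempfile
import time

def avg_stat_seconds(directory, names):
    # Average wall time for one os.stat() over every file in the directory.
    start = time.perf_counter()
    for name in names:
        os.stat(os.path.join(directory, name))
    return (time.perf_counter() - start) / len(names)

with tempfile.TemporaryDirectory() as d:
    names = []
    for target in (1000, 4000, 16000, 64000):
        # Create empty files until the directory holds `target` entries.
        while len(names) < target:
            name = "f%06d" % len(names)
            open(os.path.join(d, name), "w").close()
            names.append(name)
        print("%6d files: %.1f us/stat"
              % (target, avg_stat_seconds(d, names) * 1e6))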
Two examples: two filesystems, both 1 TB, both 95% full.
One has 7,650 files in it. That copies to the spool disk at an average
speed of 80 MB/s.
The other has 3,500,000 files in it. That one only averages 15 MB/s to
the spool disk.
(Spool is all SSD and the FD-SD link is 1 Gb/s. The limits are imposed
by the FD machine's filesystem and underlying disk arrays. If the
filesystems are in use then things get much slower.)
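Back-of-the-envelope, those two copies are consistent with a fixed
per-file cost dominating. Rough arithmetic of my own (assuming 95% of
1 TB is about 950 GB of data on each filesystem; everything else is
from the numbers above):

# The 950 GB figure is my assumption; the speeds and file counts are
# from the two copies described above.
data_mb = 950 * 1024
fast = data_mb / 80            # seconds at 80 MB/s (7,650 files)
slow = data_mb / 15            # seconds at 15 MB/s (3,500,000 files)
extra = slow - fast            # extra wall time attributable to small files
print("fast copy: %.1f h" % (fast / 3600))                  # ~3.4 h
print("slow copy: %.1f h" % (slow / 3600))                  # ~18.0 h
print("fixed cost: ~%.0f ms/file" % (extra / 3.5e6 * 1e3))  # ~15 ms/file

About 15 ms of fixed cost per file is in the right ballpark for a
stat() + open() pair hitting a busy spinning-disk array.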