hymie!> So one of my machines has a few zillion tiny little files.
Here's your problem right there. Reading all the metadata for those
files is the killer.
If the client is beefy enough, you can try splitting the job up so there
are multiple readers all hitting the disk at once. This will
parallelize the metadata reads and speed things up, provided the disk
I/O subsystem on the client can handle the load.
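For what it's worth, a rough bacula-dir.conf sketch of that idea -- two
Jobs against the same client, each with its own FileSet, so two file
daemon sessions walk the tree concurrently. The names and paths here
are invented for illustration, and the usual required Job directives
(Type, Level, Storage, Pool, Schedule, Messages) are left out for
brevity:

```
# Fragment only: names/paths are examples, other required
# directives omitted. Make sure Maximum Concurrent Jobs is
# raised on the Director, Client and Storage resources, or
# the two jobs will just queue up behind each other.
Job {
  Name = "bigclient-part1"
  Client = bigclient-fd
  FileSet = "BigClient-Part1"
}
Job {
  Name = "bigclient-part2"
  Client = bigclient-fd
  FileSet = "BigClient-Part2"
}
FileSet {
  Name = "BigClient-Part1"
  Include { File = /data/subset1 }
}
FileSet {
  Name = "BigClient-Part2"
  Include { File = /data/subset2 }
}
```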
hymie!> My full backup took 44 hours. I can deal with that if I have to.
hymie!> My incremental backup has been running for 10 hours now.
hymie!> Files=71,560 Bytes=273,397,510 Bytes/sec=7,666 Errors=0
hymie!> Files Examined=14,675,372
Yeah, you're getting killed by the time it's taking to examine each
and every one of your millions of files to find those which have
changed.
You'll get a huge speedup if you can a) fix the application to NOT
write so many small files, or b) spread them out across lots of
directories (search Google for hashing files across directories --
lots of NNTP servers had this issue with news spools in the past).
Then you can put in multiple client entries so that each subset of
files gets scanned in parallel.
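The hashing trick is just: derive a bucket directory from a hash of the
filename, so files land evenly across a fixed set of subdirectories
instead of piling up in one. A minimal sketch in Python (the function
name and bucket count are made up for the example):

```python
import hashlib

def spool_path(filename, buckets=256):
    """Map a filename to one of `buckets` subdirectories by hashing it,
    so millions of files spread evenly rather than landing in one dir."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    bucket = int(digest[:4], 16) % buckets
    # e.g. "3f/article-123456.msg" -- deterministic, so readers can
    # recompute the path from the filename alone.
    return "%02x/%s" % (bucket, filename)

print(spool_path("article-123456.msg"))
```

With 256 buckets, 14 million files works out to roughly 57,000 per
directory instead of millions in one, and each bucket range can be
given its own client entry so the scans run side by side.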
John
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users