Christoph writes:
> Well, as nobody has been able to help me so far (even the xfs mailing
> list is very silent ...), let's go a step further:
>
> My filesystem for backuppc data now contains about 160 million inodes.
> bonnie++ tells me that this filesystem is able to create 1000-4000
> inodes per second. What I get in reality is about 50-150 inodes created
> per second.
>
> My scenario: I did an incremental BackupPC_dump of a small client's
> filesystem containing about 70000 files in 5000 directories. Nearly no
> data had changed on the client, so BackupPC just had to create the full
> directory structure, about 5000 directories. While running,
> BackupPC_dump consumed 100% cpu time, i/o wait was 0%. Watching the
> creation rate of inodes, I got about 100 inodes/second.
> I then started BackupPC_dump under Devel::DProf as a profiling tool.
> Result of dprofpp is:
>
> Total Elapsed Time = 503.3859 Seconds
> User+System Time = 116.8679 Seconds
> Exclusive Times
> %Time ExclSec CumulS #Calls sec/call Csec/c Name
> 13.7 16.06 70.642 5063 0.0032 0.0140 BackupPC::View::dirCache
> 8.33 9.735 27.133 30255 0.0003 0.0009 BackupPC::Attrib::read
> 7.66 8.950 19.010 33326 0.0003 0.0006 BackupPC::FileZIO::read
> 6.39 7.465 7.465 30378 0.0002 0.0002 BackupPC::Lib::dirRead
> 6.33 7.399 7.399 29709 0.0002 0.0002 Compress::Zlib::inflateStream::inflate
> [snip]
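A profile like the one above can be collected roughly as follows (a sketch only; the BackupPC_dump path and host name here are placeholders, not taken from the original post):

```shell
# Run the dump under the Devel::DProf profiler; it writes tmon.out
# in the current directory. Adjust the path and host for your install.
perl -d:DProf /usr/share/backuppc/bin/BackupPC_dump -i somehost

# Summarize tmon.out into the exclusive-time table shown above.
dprofpp tmon.out
```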
Thanks for the interesting analysis.
dirCache is called 5063 times, so that is the number of directories.
But BackupPC::Attrib::read is called 30255 times, so it appears that
6 backups are being merged together (a full and 5 prior incrementals).
For every directory on the client, the attrib file and directory
contents of 6 different directories on the server need to be read.
I assume that's how you set $Conf{IncrLevels}.
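That inference is just a ratio of the two call counts from the profile above, which can be checked directly:

```python
# Call counts taken from the dprofpp output quoted above.
dir_cache_calls = 5063  # BackupPC::View::dirCache, one per client directory
attrib_reads = 30255    # BackupPC::Attrib::read

# Roughly one attrib file is read per merged backup per directory.
backups_merged = round(attrib_reads / dir_cache_calls)
print(backups_merged)  # -> 6, i.e. a full plus 5 incrementals
```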
I'm a bit surprised you see 100% CPU usage during the backup, since
the profile reports:
Total Elapsed Time = 503.3859 Seconds
User+System Time = 116.8679 Seconds
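Those two lines say the profiled process was on-CPU for only about a quarter of the elapsed time:

```python
# Figures from the dprofpp summary above.
elapsed = 503.3859  # Total Elapsed Time (wall clock, seconds)
cpu = 116.8679      # User+System Time (seconds)

# Fraction of wall-clock time the process actually spent on the CPU.
print(f"{cpu / elapsed:.0%}")  # -> 23%
```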
In any case, this seems to be behaving as expected. BackupPC is
doing quite a lot more work than rsync (i.e., it has to read multiple
directories and files for each directory it creates), and it is written
in perl, not compiled C code. You could repeat your test immediately
after a full backup (i.e., do a level-1 incremental) and see if it is
much faster.
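For reference, the number of backups that get merged is driven by the incremental-level schedule in config.pl; a multi-level setting like the following (a hypothetical example, not necessarily your actual config) makes a view of the latest backup merge the full plus one backup per level:

```perl
# In config.pl: each incremental is taken relative to the most recent
# backup of a lower level, so viewing the newest backup means merging
# the full plus up to five incrementals.
$Conf{IncrLevels} = [1, 2, 3, 4, 5];
```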
Craig
_______________________________________________
BackupPC-users mailing list
BackupPC-users AT lists.sourceforge DOT net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/