Mita201
ADSM.ORG Senior Member
I am facing a common problem with a large number of files, but I am not very experienced with the solutions to it:
I have a JFS2 filesystem that currently holds about 7 million relatively small files (up to a few hundred KB each). About 20,000 - 50,000 new files are generated daily at the moment, which will slowly decrease over the next year or two to around 5,000 files daily. I have journal-based backup working, the data goes to a disk pool, and I did a bit of TXNGROUPMAX tuning, but the incremental backup still takes about 18 hours. I expect that filesystem to eventually hold about 400 million files, maybe more. Is there any kind of optimization I can count on, or should I think about solving this in some other way?
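For reference, the knobs I have been tuning are roughly the ones below (server name, paths, and values are illustrative, not my exact settings):

    * dsm.sys (client) - illustrative values
    SErvername TSMSRV1
       * more parallel producer/consumer sessions
       RESOURCEUTILIZATION   5
       * larger client transactions, value in KB
       TXNBYTELIMIT          2097152
       * cache the backup inventory on disk instead of RAM
       MEMORYEFFICIENTBACKUP diskcachemethod

    * tsmjbbd.ini (AIX journal daemon)
    [JournalSettings]
    JournalDir=/tsmjournal

    [JournaledFileSystemSettings]
    JournaledFileSystems=/bigfs

    * on the TSM server
    SETOPT TXNGROUPMAX 4096

As I understand it, TXNBYTELIMIT and TXNGROUPMAX batch many small-file operations into fewer, larger transactions, and MEMORYEFFICIENTBACKUP diskcachemethod is meant to keep the client from exhausting memory once the file count grows toward the hundreds of millions.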