I've got a fairly big filesystem (3TB, 15M files) of which I want to (test) restore a part. I know that if the backend DB is slow the "Building file list" stage can take some time, but I have it stri
Author: Francisco Javier Funes Nieto <esencia AT gmail DOT com>
Date: Wed, 12 Apr 2017 09:45:26 +0200
The missing question: which database catalog are you using? On 12 Apr 2017 at 9:26 a.m., "Tom Yates" <madlists AT teaparty DOT net> wrote: I've got a fairly big filesystem (3TB, 15M files) of whi
The missing question: which database catalog are you using? The catalogue database is on MySQL, again using the version that comes with CentOS 6 (5.1.73). -- Tom Yates - http://www.teaparty.net --
Author: Martin Simmons <martin AT lispworks DOT com>
Date: Wed, 12 Apr 2017 11:07:07 +0100
Does that file tree have a lot of hard links (I think the add command only makes those queries for hard links)? If so, then using Bacula 7 might help (see "restore optimizespeed" in http://www.bacul
That might well be it. "find . -type f -links +1" says that, of the ten million or so files in that tree, around a million have more than one hard link (some have several hundred, don't ask me why).
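For anyone wanting to reproduce that check, the find invocation above can be tried safely on a throwaway tree first; the script below (paths and file names are made up for the demonstration) builds one file with three directory entries and shows that -links +1 matches every entry pointing at a multiply-linked inode, not just the "extra" links:

```shell
#!/bin/sh
# Build a disposable tree: three regular files, one of which
# gets two additional hard links.
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b" "$tmp/c"
ln "$tmp/a" "$tmp/a1"
ln "$tmp/a" "$tmp/a2"

# -links +1 matches files whose link count exceeds 1.
# a, a1 and a2 are three names for the same inode, so all
# three are listed; b and c (link count 1) are not.
find "$tmp" -type f -links +1 | wc -l   # prints 3

rm -rf "$tmp"
```

Note that this is why the "around a million" figure counts directory entries, not distinct inodes; piping through something like "xargs stat -c %i | sort -u" would give the number of unique multiply-linked files instead.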
Hello, Bacula was designed to handle a maximum of 10M files (Bacula 5.0.x). Since then file systems have grown a lot and so has Bacula. We have redesigned Bacula a number of times to be able to cope
So it turns out that going to 7.4.7 was enough. The FD clients all stayed on CentOS 6's 5.0.0, and seem to be fine (though testing continues). "optimizespeed=true" seems to be the default in 7.x; in
Hello Tom, Thanks for the feedback. I am pleased that you got such a nice improvement in performance. From the numbers you cite, it doesn't seem likely you will need any of the other ideas for possib