Tobias Bartel wrote:
>> Even with 800,000 files, that sounds very slow. How much data is
>> involved, how is it stored and how fast is your database server?
>
> It's about 70GB of data, stored on a Raid5 (3Ware controller).
>
> The database is a SQLite one, on the same machine but on a Software
> Raid 1.
>
> The backup device is an LTO3 connected via SCSI
>
> OS is a Debian stable.
>
>
> I already thought about moving the database to MySQL, but there is
> already a MySQL server on the same box; it is a slave for our MySQL
> master and is used for hourly backups of our database (stop the
> replication, do the backups, and start the replication again).
> I don't really like the idea of adding a DB to the slave that isn't on
> the master, nor do I like the idea of hacking up some custom MySQL
> install that runs in parallel, because that will cost me with every
> future update.
Perhaps Postgres on the same host?
> To be honest, I didn't expect that SQLite could be the bottleneck; it
> just can't be that slow. What made me think it's the number of files
> is that when I do an ls in that directory, it takes ~15 min before I
> see any output.
That is more likely ls playing tricks on you. Try:
ls -f | head (or just ls -f)
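For context, a rough sketch of why this matters (assuming GNU coreutils ls; the temp-directory names below are just for illustration): plain ls reads and sorts every directory entry before printing anything, while -f disables sorting so entries stream out as readdir() returns them.

```shell
# Sketch, assuming GNU coreutils `ls`.  Plain `ls` collects and sorts
# ALL entries before printing anything (and with --color or -l it also
# stat()s each file).  `-f` skips sorting, so output starts immediately
# even in a directory with hundreds of thousands of files.
demo=$(mktemp -d)
cd "$demo"
touch file_{0001..1000}      # small stand-in for an 800,000-file directory

ls -f | head -n 5            # unsorted, prints right away
ls    | head -n 5            # sorted first; the slow path on huge dirs
```

Note that -f also implies -a, so `.` and `..` show up in its output.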
--
Jesper
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users