Kai Zimmer wrote:
> the machines are:
> kirk (amanda-server): 2x650 P3; IDE-LVM (hold: 2, local 6 disks), Linux,
> Giga-Ethernet
How much holding disk does that give you? Verify the setting in
amanda.conf; you don't want to be limited here.
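In case it helps, a holding-disk definition in amanda.conf looks roughly like the following; the directory path and sizes are placeholders, not taken from your setup:

```
# Hypothetical holding-disk entry; adjust path and sizes to your LVM volume.
holdingdisk hd1 {
    comment "main holding disk"
    directory "/amanda/holding"   # mount point of the holding volume
    use -1000 Mb                  # negative value: use all free space except 1 GB
    chunksize 1 Gb                # split large dumps into 1 GB chunks
}
```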
> before: 1x244 Opteron, single-SCSI-Disk U160, Linux, Giga-Ethernet
> crusher: 1x242 Opteron, single-SCSI-Disk U160, Linux, Giga-Ethernet
> hera: 2x1,4 Ghz P3, SCSI-LVM (2 disks), Linux, Giga-Ethernet
> kira: 2x1,4 Ghz P3, IDE-LVM (8 disks), Linux, Giga-Ethernet
> lore: 2x1,8 Ghz Xeon, SCSI-HW-Raid (10 disks R5), Linux, Giga-Ethernet
> zeus: 2x400 Mhz Sparc, single-SCSI-Disks (4), Solaris 8, Giga-Ethernet
I'd venture to try `compress server' here on all but the Sparc machine.
They seem like nice beasts to me...
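As a sketch, server-side compression is switched on per dumptype in amanda.conf; the dumptype name below is made up, and `fast' vs. `best' depends on how much CPU the server can spare:

```
# Hypothetical dumptype enabling software compression on the amanda server.
define dumptype comp-server-tar {
    global
    program "GNUTAR"
    compress server fast    # or "compress server best" if CPU allows
}
```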
> USAGE BY TAPE:
> Label Time Size % Nb
> volume008 30:30 193301.4 93.5 7
> volume009 11:01 199313.5 96.4 1
> volume010 15:47 192761.9 93.3 2
> volume011 7:13 130811.4 63.3 1
[...]
> taper: tape volume008 kb 389149248 fm 8 writing file: No space left on
> device
[...]
> taper: tape volume009 kb 305159648 fm 2 writing file: No space left on
> device
[...]
> taper: tape volume010 kb 324967808 fm 3 writing file: No space left on
> device
[...]
> taper: tape volume011 kb 133950912 fm 1 [OK]
Notice you hit EOT between 300GB and 390GB, so your data is quite
compressible and you are in fact using hardware compression. Is that
LTO-2? Anyway, the successful size of a tape is only about 200GB, which
means that amanda wrote more than 100GB to tape, only to run into EOT
and have to start over.
Conclusion: either give a more realistic estimate of the effective tape
length in your tapetype definition, so amanda doesn't schedule a huge
flush only to fail; or use software compression with the true tape
length (that would help with holding-disk space as well). And moreover,
break your disks into smaller DLEs; that way, when amanda hits EOT, she
only needs to restart a few GB, not hundreds. That alone should cut
your backup time by a third. Add to that the better parallelization,
and you're down to less than a weekend.
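A tapetype along these lines would keep amanda from over-scheduling; the length here is a guess at the tapes' native (uncompressed) capacity and should be calibrated against your actual drive, e.g. with the amtapetype utility:

```
# Hypothetical tapetype with a realistic length; values are estimates only.
define tapetype LTO2-native {
    comment "LTO-2, hardware compression disabled"
    length 190000 mbytes    # stay a bit under the nominal 200 GB
    filemark 0 kbytes
    speed 26000 kbytes
}
```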
> before  /home        0  29046520  29046520  --  215:09 2250.1  214:16 2259.4
> crusher /home        0  16168600  16168600  --  205:04 1314.1  240:58 1118.3
> hera    /home        0  30581620  30581620  --  419:16 1215.7  540:09  943.6
> kira    /home        0  87110752  87110752  -- 1291:56 1123.8  431:15 3366.6
> kira    -me/konvert  0 133950848 133950848  -- 1633:50 1366.4  432:31 5161.7
> kira    -re_rehbein  1    197190    197190  --   33:11   99.0   11:03  297.5
> kirk    /home/clip   0 110277408 110277408  --  577:09 3184.6  516:11 3560.6
> kirk    /ohne-clip   0 204097056 204097056  -- 1561:00 2179.1  660:32 5149.8
> lore    /local/home  0 104371080 104371080  -- 1059:46 1641.4  489:04 3556.8
> lore    -me/sokirko  0   8572610   8572610  --  116:18 1228.6  169:18  843.9
> zeus    /home        0   9003040   9003040  --  139:31 1075.5  165:25  907.2
As far as I can see, you've got dump rates of consistently over 1MB/s.
That's not bad, I think; I sometimes get worse. The longest runner
would be kira:.../konvert: 134GB at 1.3MB/s gives about 100000s, which
spells 27h to me. Plus two other disks on the same machine that cannot
be parallelized. :-( Cutting them into smaller pieces would at least
keep them from all doing a full on the same day, though.
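Splitting could be done in the disklist with GNU tar include patterns; the entry names and patterns below are illustrative only, and assume a tar-based dumptype like the sample user-tar:

```
# Hypothetical disklist entries splitting one large /home into two DLEs,
# each dumping a subset of the top-level directories via include patterns.
kira /home-a-m /home {
    user-tar
    include "./[a-m]*"
}
kira /home-n-z /home {
    user-tar
    include "./[n-z]*"
}
```

With separate DLEs, amanda can also stagger their full dumps across different nights.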
Alex
--
Alexander Jolk / BUF Compagnie
tel +33-1 42 68 18 28 / fax +33-1 42 68 18 29