Re: Speed up 400GB backup?
2004-07-20 17:40:55
Hi, Frank,
on Tuesday, 20 July 2004 at 07:41 you wrote to amanda-users:
>> 420GB is not the total amount per night. Something is bogging this down
>> though and I don't know what. I am not using holding disks because the
>> majority of data is being backed up from one set of disks to another on
>> the same machine. This one machine has a set of RAID 10 disks. These
>> disks are backed up by amanda and put onto a set of RAID 5 disks.
FS> OK, I was assuming a different setup. Having a holding disk would let
FS> you run multiple dumps in parallel. Wouldn't help much (if any) when
FS> its all on one machine, but can really speed up your overall time if
FS> you have multiple clients.
Given Joshua's note about having data and backup on the same
controller, I would suggest adding a cheap and huge IDE drive (plus a
controller, if necessary) as a holdingdisk.
This will speed things up locally, too: you get parallel dumping, AND
people can access their data at roughly normal speed even while the
holdingdisk is still feeding the tape. (This alone won't fix the
problem here, though, since estimates are not done on the holdingdisk ...)
Having a separate holdingdisk is never a bad thing with AMANDA, IMHO.
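For reference, defining one looks roughly like this in amanda.conf -- a
minimal sketch only; the directory, size, and chunksize here are made-up
values for illustration, not taken from the poster's setup:

```
# amanda.conf -- hypothetical holdingdisk definition
holdingdisk hd1 {
    comment "cheap IDE drive used as holding disk"
    directory "/dumps/amanda"   # hypothetical mount point of the extra IDE drive
    use 180 Gb                  # leave some headroom on the disk
    chunksize 1 Gb              # split dump images into 1 GB chunks
}
```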
>> As far
>> as assigning spindle #s goes I don't quite understand why I would set
>> that. I have inparallel set to 4 and then didn't define maxdumps, so I
>> would assume that not more than 1 dumper would get started on a machine
>> at once. Am I getting this right?
FS> I think maxdumps defaults to 2 but I may be wrong (someone else should
FS> jump in here).
It is 10. ( grep -r "define MAXDUMPS" amanda-2.4.4-p3 )
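Whatever the compiled-in maximum is, you can set the parallelism
explicitly rather than rely on defaults. A sketch (not the poster's
actual config; hostnames and dumptype names are placeholders):

```
# amanda.conf -- explicit parallelism settings
inparallel 4   # at most 4 dumpers running in total
maxdumps 1     # at most 1 simultaneous dump per client

# disklist -- the optional 4th field is the spindle number; entries on the
# same host with the same spindle are never dumped in parallel:
#
#   venus  /home      comp-user-tar  1
#   venus  /var/mail  comp-user-tar  1   # same spindle => dumped sequentially
```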
>> Estimate Time (hrs:min) 7:30
FS> Here's your runtime problem, 7.5 hours for estimates .
Yep.
>> Run Time (hrs:min) 10:35
>> Dump Time (hrs:min) 2:52 0:29 2:23
FS> Three hours for dumps doesn't seem too bad. It could probably
FS> be improved some, but the estimates are what's killing you.
Yep again.
FS> As for the estimates, are you using dump or tar? Look in the
FS> *debug files on the clients and see which one was taking all the time
FS> (I'm guessing venus since it looks like you did a force on bda1).
FS> Does that filesystem have millions of small files?
FS> I'm not sure of the best way to speed up estimates, other than a
FS> faster disk system. Perhaps someone else on the list has some ideas.
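If tar is in use, one cheap way to see where the estimate time goes is
to time a tar estimate by hand on the client. GNU tar skips reading
file contents when the archive is /dev/null, so this roughly mimics
what Amanda's sendsize does for a GNUTAR estimate (the path below is the
suspect filesystem from the report; adjust as needed):

```shell
# Run on the client (venus). If this alone takes hours, the estimate
# bottleneck is walking the filesystem, not Amanda itself.
time tar --create --file /dev/null --totals /home
```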
My idea is to request more details here: the relevant dumptype
definition, whether the dumps run locally or over the network, the
output of df for venus:/home, etc.
...
--
best regards,
Stefan