OK, at our next restore, I will send the output. We have 300 Mb of
bandwidth both down and up, so that isn't the problem. Possibly the
number of files is the problem.
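For reference, the job report can also be pulled after the fact, either
from bconsole or from the Director's log file. A minimal sketch (the log
path below is only an example; the real one is whatever your Messages
resource appends to):

  # in bconsole: find the job id, then flush any queued job output
  *list jobs
  *messages

  # or grep the finished job's summary out of the Director log
  grep -B2 -A40 'Termination:' /var/log/bacula/bacula.log
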
Dan Langille wrote:
> On Oct 8, 2008, at 4:15 PM, Joe Mannuzza wrote:
>
>>
>> Dan Langille wrote:
>>>
>>> On Oct 8, 2008, at 3:57 PM, Joe Mannuzza wrote:
>>>
>>>> Dan Langille wrote:
>>>>>
>>>>> On Oct 8, 2008, at 1:11 PM, Joe Mannuzza wrote:
>>>>>
>>>>>> Dan Langille wrote:
>>>>>>>
>>>>>>> On Oct 2, 2008, at 1:54 PM, Joe Mannuzza wrote:
>>>>>>>>
>>>>>>>> Has anyone had issues doing a restore of large data sets via
>>>>>>>> Bacula? Specifically, has anyone noticed the restore process
>>>>>>>> taxing the CPU?
>>>>>>>
>>>>>>>
>>>>>>> Can you be more specific regarding these processes?
>>>>>>>
>>>>>>> At what stage of the backup does the CPU get taxed?
>>>>>>>
>>>>>>> It is normal for the tree-building phase to require a lot of CPU.
>>>>>>>
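>>>>>>> If the tree build itself turns out to be the expensive part: in
>>>>>>> the versions I've used, the restore command's "Enter a list of
>>>>>>> files to restore" menu choices skip building the directory tree
>>>>>>> entirely, at the cost of naming the paths up front. Roughly:
>>>>>>>
>>>>>>>   *restore
>>>>>>>   (pick "Enter a list of files to restore" from the menu, then
>>>>>>>   type full paths one per line, ending with a blank line)
>>>>>>>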
>>>>>
>>>>>> Dan,
>>>>>>
>>>>>> Thanks for the response. I am unsure at what stage it gets
>>>>>> bogged down. Is there a way to check after the fact?
>>>>>
>>>>> Not that I can think of.
>>>>>
>>>>>> Also, there were many backups going on at the time of the
>>>>>> failure, around 40. Server info: two Xeon 5150s @ 2.66 GHz,
>>>>>> 3 GB of RAM.
>>>>>
>>>>> Is that normal? Do you usually run 40 concurrent backups?
>>>
>>>
>>>
>>>> We don't always have that many going, but from 5:00 PM to 8:00 or
>>>> 9:00 PM we can have up to 45. We don't have any problems with our
>>>> backups unless a restore is running at the same time.
>>>
>>> So you're asking whether doing a restore while 45 backup jobs are
>>> running might tax the CPU. The answer is yes.
>>>
>>> How much this will tax the CPU depends upon too many factors to
>>> discuss in this scope.
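>>>
>>> If the goal is to keep a daytime restore from colliding with the
>>> evening batch, one knob worth knowing is Maximum Concurrent Jobs.
>>> A minimal sketch for bacula-dir.conf (the name and the number are
>>> placeholders, not recommendations):
>>>
>>>   Director {
>>>     Name = backup-dir                 # placeholder name
>>>     # ... your existing directives stay as-is ...
>>>     Maximum Concurrent Jobs = 20      # caps simultaneous jobs
>>>   }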
>>>
>>>> Is it possible that the decompression the Bacula server performs
>>>> is putting too much load on the CPU (combined with the backups)?
>>>
>>> Decompression? By bacula-dir? There is no decompression done there,
>>> AFAIK. bacula-fd will do decompression on restore if you are using
>>> software compression.
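>>>
>>> For context: software compression is configured per FileSet in
>>> bacula-dir.conf, and it is the file daemon that compresses on backup
>>> and decompresses on restore, so that CPU cost lands on the client.
>>> A minimal sketch (names and paths are placeholders):
>>>
>>>   FileSet {
>>>     Name = "Full Set"
>>>     Include {
>>>       Options {
>>>         signature = MD5
>>>         compression = GZIP   # FD-side gzip, paid on backup and restore
>>>       }
>>>       File = /home
>>>     }
>>>   }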
>
>
>> One of the problems we run into is that when we try to restore a
>> server with a large amount of disk space used (well, just around
>> 40 GB total across 2 disks), it takes all day to complete and runs
>> into the night, when the backups are running. By all day I mean from
>> 9:00 AM until 8:00 PM. Should a restore of that size take so long?
>> Is there a way to speed up restores so that they don't take all day?
>> If we wanted to restore a system with 100 GB of used disk space,
>> what could we do to speed up the process, if anything? We don't have
>> backups running from 8 AM to 5 PM, except for a few stragglers from
>> the night before (4 at the most, not an issue for us).
>
> Please remember to cc the mailing list.
>
> I am not following the above. Let's keep it simple.
>
> The length of time it takes to restore data depends on many things:
>
> - how fast your hard drives are
> - how much the system is being used at the time
> - how fast your network is (for sending data from bacula-sd to
> bacula-fd, where it is restored)
> - are these many small files or a few large files? Writing 10 GB of
> data in 10 files takes far less time than writing the same amount of
> data in 100,000 files (a quick way to see this is sketched below)
>
> Those are just the things I can think of right now.
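>
> A rough way to feel the small-file effect on the restore target (a
> throwaway sketch; sizes are arbitrary, run it in a scratch directory):
>
>   # one 1 GB file: a single create plus sequential writes
>   time dd if=/dev/zero of=bigfile bs=1M count=1024
>
>   # 100,000 empty files: no data at all, yet each one still costs a
>   # create, an inode, and attribute updates
>   mkdir many && time sh -c 'cd many && seq 1 100000 | xargs touch'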
>
> If you can paste us the output of one of these jobs, that will give us
> some context.
>
> --Dan Langille
> http://langille.org/