On Tue, January 4, 2011 7:55 am, Tom Sommer wrote:
> On Tue, January 4, 2011 12:59, Dan Langille wrote:
>> On 1/4/2011 5:00 AM, Tom Sommer wrote:
>>
>>> On Tue, January 4, 2011 03:15, Dan Langille wrote:
>>>
>>>> On 1/3/2011 12:57 PM, Tom Sommer wrote:
>>>>
>>>>
>>>>> I'm currently restoring 1.5 million files, and it's taking forever.
>>>>>
>>>>> bacula-sd is using 100% CPU and disk IO is apparently low, so I assume
>>>>> it's a CPU issue.
>>>>>
>>>>> My machine has 16GB RAM and 2 CPUs:
>>>>>
>>>>> top - 18:54:53 up 75 days, 10:16, 3 users, load average: 1.00, 1.00, 1.00
>>>>> Tasks: 172 total, 1 running, 171 sleeping, 0 stopped, 0 zombie
>>>>> Cpu0 : 85.3%us, 1.0%sy, 0.0%ni, 12.7%id, 0.0%wa, 0.0%hi, 1.0%si, 0.0%st
>>>>> Cpu1 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>>>>> Cpu2 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>>>>> Cpu3 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>>>>> Cpu4 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>>>>> Cpu5 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>>>>> Cpu6 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>>>>> Cpu7 : 11.9%us, 0.0%sy, 0.0%ni, 88.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
>>>>> Mem: 16429812k total, 16345744k used, 84068k free, 1272k buffers
>>>>> Swap: 3124632k total, 184k used, 3124448k free, 5690688k cached
>>>>>
>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>> 4014 root 18 0 109m 19m 1264 S 100.3 0.1 135:07.23 bacula-sd
>>>>>
>>>>> As you can see, the process is pegging a single core at 100% CPU. Is
>>>>> there any way to make Bacula use all the cores, or any other way to
>>>>> speed up the restore? At this rate it could take days to restore the
>>>>> data.
>>>>
>>>> What stage of the restore is occurring? Is it building the file
>>>> tree? Has the restore started?
>>>>
>>>
>>> It's running - sending the files to the server.
>>>
>>> 2 days in, it has completed 50GB of a total of ~220GB.
>>>
>>> Not really that impressive.
>>>
>>> My files are stored in blocks of 10GB.
>>>
>>
>> Sounds like spooling or database index issues. I'm guessing you are
>> using MySQL and there are no spool options on your job / bacula-sd.
>>
>> The database index issues have been discussed previously on this list.
>> You should be able to find them. Look at that first before thinking
>> about spooling.
>
> I'm thinking it might be due to compression? Does that make sense?

I don't know. But FYI: the SD does not do compression; the FD does.
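
(For reference, the catalog-index check suggested earlier in the thread can be done with a couple of MySQL statements. This is only a sketch: it assumes the stock MySQL catalog schema of the Bacula 5.x era and a catalog database named `bacula`; the index name `file_jpf_idx` is arbitrary.)

```sql
-- List the indexes that already exist on the File table:
SHOW INDEX FROM bacula.File;

-- If JobId/PathId/FilenameId are not covered, a composite index like this
-- is the usual suggestion for slow restores. Building it can take a long
-- time on a large File table, so do it outside the backup window:
CREATE INDEX file_jpf_idx ON bacula.File (JobId, PathId, FilenameId);
```

If the restore is still CPU-bound in bacula-sd after the index looks sane, spooling settings on the job and storage daemon are the next thing to examine, as suggested above.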
--
Dan Langille -- http://langille.org/
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users