Hi there,
I was trying to restore a large data set from an offline
backup hard disk to a local hard disk. There are 36,225,746
files, and the total data size is 1.7TB.
Steps to restore the data:
1. Rebuild the catalog using bscan
2. bconsole > restore > option 3 "Enter list of comma
separated JobIds to select" > enter {jobid} >
mark folder_to_be_restored
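For reference, a bscan invocation for this kind of catalog rebuild typically looks like the following (the device name "FileStorage", the volume name "Vol0001", and the config path are placeholders; substitute your own):

```shell
# Rebuild catalog entries by scanning the backup volume(s):
#   -v  verbose output
#   -s  store/synchronize records in the catalog database
#   -m  update media (volume) information
#   -V  volume name(s) to scan
# Placeholder device/volume/config values - adjust to your setup.
bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V Vol0001 FileStorage
```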
After entering "mark folder_to_be_restored", it took about
28 hours before the $ prompt came back and let me proceed to
enter "done". I'm not sure why it took so long. Is it
possible to do some tuning? I can see a "bacula-dir" process
using only about 4GB of memory, while the postgres processes
are each below 2GB. 7.7GB of memory is free and 3.5GB is
used for cache.
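One thing I plan to check is whether the File table still has its JobId index after the bscan rebuild, since building the directory tree for "mark" has to pull all 36M file rows for the job. Something along these lines (the database name "bacula" and the index name are assumptions; Bacula's stock PostgreSQL schema normally creates this index, so it may already exist):

```shell
# Inspect the file table definition and confirm a jobid index is listed.
psql -d bacula -c '\d file'

# If the jobid index is missing, recreate it and refresh planner stats.
# Index name here is an assumption - match whatever your schema used.
psql -d bacula -c 'CREATE INDEX file_jobid_idx ON file (jobid);'
psql -d bacula -c 'VACUUM ANALYZE file;'
```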
System configuration:
OS: 2.6.35.14-106.fc14.x86_64
Mem: 16GB
Partition 1 (OS): RAID1
Partition 2 (data restoration): RAID1
Bacula: 5.0.3
PostgreSQL: 8.4.9-1.fc14.x86_64
#postgresql.conf
shared_buffers = 2GB
effective_cache_size = 8GB
maintenance_work_mem = 512MB
work_mem = 128MB
wal_buffers = 8MB
checkpoint_segments = 64
checkpoint_timeout = 20min
checkpoint_completion_target = 0.9
synchronous_commit = off
#sysctl.conf
kernel.shmmax = 6442450944
kernel.shmall = 4194304
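As a sanity check on those kernel limits against shared_buffers (page size assumed to be 4096 bytes; `getconf PAGE_SIZE` will confirm):

```shell
# Verify the SysV shared memory limits cover PostgreSQL's shared_buffers.
PAGE_SIZE=4096              # assumed; check with: getconf PAGE_SIZE
SHMALL_PAGES=4194304        # kernel.shmall from sysctl.conf (in pages)
SHMMAX_BYTES=6442450944     # kernel.shmmax from sysctl.conf (in bytes)

echo "shmall covers: $((SHMALL_PAGES * PAGE_SIZE / 1024 / 1024 / 1024)) GB"
echo "shmmax allows: $((SHMMAX_BYTES / 1024 / 1024 / 1024)) GB"
# prints: shmall covers: 16 GB
#         shmmax allows: 6 GB
```

So a single segment can be up to 6GB and total shared memory up to 16GB, which comfortably fits shared_buffers = 2GB.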
Best,
Keith