Hi --
Getting about 41.6 MB/s and hoping for closer to the drive's maximum
(120 MB/s). I tried Maximum File Size values of 5, 8, and 12 GB -- 12 GB
was the best; the others gave about 35 MB/s. Any advice welcome... should
I look at maximum/minimum block sizes?
Most of the data is large genetics data -- file sizes average 500 MB to
3-4 GB -- and we expect growth from 4 TB to 15 TB over the next 2 years.
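For reference, the reported rate and elapsed time in the run report below are consistent with each other; a quick sanity check (plain arithmetic, no Bacula involved) also shows how far the job is from the drive's native speed:

```python
# Figures from the job report: 257,543,090,368 bytes in 1h 43m 8s.
fd_bytes = 257_543_090_368
elapsed_s = 1 * 3600 + 43 * 60 + 8  # 6188 s

# Bacula reports rate in KB/s using 1 KB = 1000 bytes.
rate_kb_s = fd_bytes / elapsed_s / 1000
print(f"rate:  {rate_kb_s:.1f} KB/s")  # matches the reported 41619.8 KB/s

# At the drive's native ~120 MB/s the same job would take roughly:
ideal_s = fd_bytes / 120_000_000
print(f"ideal: {ideal_s / 60:.0f} min vs actual {elapsed_s / 60:.0f} min")
```

So the job runs at roughly a third of native speed; whether the bottleneck is block size, the network, or the client's disks is the question.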
Run results, bacula-sd.conf, and bacula-dir.conf are below...
thanks
-- gary
Run:
===
Build OS: x86_64-redhat-linux-gnu redhat
JobId: 5
Job: Prodbackup.2011-11-29_19.32.42_05
Backup Level: Full
Client: "bacula-fd" 5.0.3 (04Aug10) x86_64-redhat-linux-gnu,redhat
FileSet: "FileSetProd" 2011-11-29 19:32:42
Pool: "FullProd" (From Job FullPool override)
Catalog: "MyCatalog" (From Client resource)
Storage: "LTO-4" (From Job resource)
Scheduled time: 29-Nov-2011 19:32:26
Start time: 29-Nov-2011 19:32:45
End time: 29-Nov-2011 21:15:53
Elapsed time: 1 hour 43 mins 8 secs
Priority: 10
FD Files Written: 35,588
SD Files Written: 35,588
FD Bytes Written: 257,543,090,368 (257.5 GB)
SD Bytes Written: 257,548,502,159 (257.5 GB)
Rate: 41619.8 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: no
Volume name(s): f03
Volume Session Id: 1
Volume Session Time: 1322622337
Last Volume Bytes: 257,740,342,272 (257.7 GB)
Non-fatal FD errors: 0
SD Errors: 0
FD termination status: OK
SD termination status: OK
Termination: Backup OK
bacula-sd.conf:
==========
Autochanger {
  Name = Autochanger
  Device = LTO-4
  Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/changer
}
Device {
  Name = LTO-4
  Media Type = LTO-4
  Archive Device = /dev/nst0
  AutomaticMount = yes   # when device opened, read it
  AlwaysOpen = yes
  RemovableMedia = yes
  RandomAccess = no
  Maximum File Size = 12GB
  Autochanger = yes
  Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
  Alert Command = "sh -c 'smartctl -H -l error %c'"
}
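If you do experiment with block sizes, a Device resource along these lines is one place to start. The values here are illustrative assumptions to test with btape, not tuned recommendations; note in particular that volumes written with one block size cannot be read back with a different Maximum Block Size setting, so change this only for fresh volumes:

```conf
Device {
  Name = LTO-4
  Media Type = LTO-4
  Archive Device = /dev/nst0
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  RandomAccess = no
  Autochanger = yes
  Maximum File Size = 12GB
  # Larger hardware blocks reduce per-block overhead on LTO drives.
  # The default is 64512; 262144 (256 KB) is a common experiment.
  Maximum Block Size = 262144
  Minimum Block Size = 262144   # equal min/max = fixed-size blocks
}
```

The btape utility's "speed" command can measure what the drive itself sustains with a given block size, which separates drive throughput from client/network effects.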
bacula-dir.conf:
==========
Job {
  Name = "Prodbackup"
  Client = bacula-fd
  FileSet = "FileSetProd"
  Schedule = "ScheduleProd"
  Write Bootstrap = "/var/spool/bacula/%c.bsr"
  Full Backup Pool = FullProd
  Incremental Backup Pool = IncrProd
  Differential Backup Pool = DiffProd
  Storage = LTO-4
  Type = Backup
  Level = Incremental
  Pool = IncrProd
  Priority = 10
  Messages = Standard
}
FileSet {
  Name = "FileSetProd"
  Include {
    Options {
      WildFile = "*.OLD"
      WildFile = "*.o"
      WildFile = "*.bak"
      Exclude = yes
    }
    File = /my/home/xxxxxxx
  }
  Exclude {
    File = /my/home/tmp
  }
}
Schedule {
  Name = "ScheduleProd"
  Run = Full 1st sun at 16:05
  Run = Differential 2nd-5th sun at 16:05
  Run = Incremental mon-sat at 16:05
}
Pool {
  Name = FullProd
  Label Format = "FullProd"
  Pool Type = Backup
  Recycle = yes      # Bacula can automatically recycle Volumes
  AutoPrune = yes    # Prune expired volumes
  Volume Retention = 10 years
  Maximum Volume Jobs = 1
}
Pool {
  Name = DiffProd
  Label Format = "DiffProd"
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 40 days
  Maximum Volume Jobs = 1
}
Pool {
  Name = IncrProd
  Label Format = "IncrProd"
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 10 days
  Maximum Volume Jobs = 1
}
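Beyond block sizes, data spooling is another common lever when a tape drive runs below native speed: the SD buffers the job to fast local disk and then streams it to tape in one burst, which avoids shoe-shining when the client can't feed the drive steadily. A sketch, assuming a local spool disk mounted at /var/spool/bacula (path and spool size are assumptions, not taken from this setup):

```conf
# bacula-dir.conf: enable spooling for the job
Job {
  Name = "Prodbackup"
  # ... existing directives unchanged ...
  Spool Data = yes
}

# bacula-sd.conf: tell the Device where to spool and how much
Device {
  # ... existing directives unchanged ...
  Spool Directory = /var/spool/bacula
  Maximum Spool Size = 200GB   # illustrative; size to your spool disk
}
```

This only helps if the spool disk can be read faster than the tape writes, so it is worth benchmarking the spool volume first.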
_______________________________________________
Bacula-users mailing list
Bacula-users AT lists.sourceforge DOT net
https://lists.sourceforge.net/lists/listinfo/bacula-users