-----Original Message-----
From: Schwarz Werner BI
Sent: Friday, December 13, 2002 14:58
To: 'acit AT ATTGLOBAL DOT NET'
Cc: Schwarz Werner BI
Subject: RE: BackupSet: Is there an efficient method to generate backupsets
Hi Zlatko
Thanks very much for the hints about backupsets. The actual problem we
are trying to solve is the following:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes
(primary tape stgpool). This was the main reason for an unacceptably long
restore time.
Now we are looking for a method to 'cluster together' all active backup
versions per node without backing up the whole TSM client every night
(the way VERITAS NetBackup does). These 'clustered' active backup versions
should then be the candidates during a normal restore. We have already
activated COLLOCATE.
Do you have any more ideas?
Thanks,
werner
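On the server side, the scattering Werner describes is usually attacked with collocation plus aggressive reclamation, which gradually consolidates each node's data onto fewer volumes. A minimal sketch of the relevant administrative commands, run through the dsmadmc administrative client; the pool name TAPEPOOL, node name WERNER_NODE, and the admin credentials are placeholders, not values from this thread:

```shell
# Ensure the primary tape pool collocates data by node
# (Werner states COLLOCATE is already active):
dsmadmc -id=admin -password=secret "update stgpool TAPEPOOL collocate=yes"

# Lowering the reclamation threshold causes partially empty volumes to be
# reclaimed sooner; combined with collocation this pulls a node's data
# onto fewer tapes over time:
dsmadmc -id=admin -password=secret "update stgpool TAPEPOOL reclaim=60"

# Check how much data a node occupies per storage pool:
dsmadmc -id=admin -password=secret "query occupancy WERNER_NODE"
```

These commands need a running TSM server and an administrative client session; they are a sketch of the tuning direction, not a full procedure.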
-----Original Message-----
From: Zlatko Krastev/ACIT [mailto:acit AT ATTGLOBAL DOT NET]
Sent: Friday, December 13, 2002 13:17
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: BackupSet: Is there an efficient method to generate backupsets
Werner,
on day_02 you will have 990 files still active from day_00 (bds010.1,
bds011.1, ..., bds999.1) plus 10 files from day_01 (bds000.2, bds001.2, ...,
bds009.2), so your assumption is completely correct.
What you want (as far as I understand it) is to mimic copypool behavior
with backupsets, and that is not possible. If you are doing this for only
one node, go the backupset way (1 tape/day) and it might work fine for you.
If this is to be done for several nodes, a copypool is the right answer
(a few tapes/day + 1 DB tape/day). A copypool also allows you to recover
from a bad primary tape, which you cannot accomplish with backupsets.
Zlatko Krastev
IT Consultant
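The copypool approach Zlatko recommends can be sketched with the following administrative commands; the pool names COPYPOOL and TAPEPOOL and the device class LTOCLASS are placeholder names assumed for illustration:

```shell
# Define a copy storage pool on a tape device class:
dsmadmc -id=admin -password=secret \
  "define stgpool COPYPOOL LTOCLASS pooltype=copy maxscratch=50"

# Nightly, copy primary-pool data into the copy pool; only data not yet
# in the copy pool is moved, which is the incremental behavior backupsets
# cannot provide:
dsmadmc -id=admin -password=secret "backup stgpool TAPEPOOL COPYPOOL"

# Back up the server database as well, since the copy pool is only usable
# for recovery together with a matching database backup:
dsmadmc -id=admin -password=secret "backup db devclass=LTOCLASS type=full"
```

This is a sketch of the technique, not a complete disaster-recovery setup; scheduling and off-site vaulting are left out.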
Schwarz Werner <Werner.Schwarz AT BEDAG DOT CH>
Sent by: "ADSM: Dist Stor Manager" <ADSM-L AT VM.MARIST DOT EDU>
12.12.2002 18:15
Please respond to "ADSM: Dist Stor Manager"
To: ADSM-L AT VM.MARIST DOT EDU
cc:
Subject: BackupSet: Is there an efficient method to generate
backupsets
I need help, please:
Can somebody tell me how to solve the problem described in the following
example?
---- begin example
assumption: the policy keeps 2 backup_versions per file
time: day_00
The 1st time I create a backupset, all 1000 active backup_versions are
consolidated on tape_01 {bds000.1,bds001.1, ... ,bds999.1}.
time: day_01
Incremental backup creates 10 newer backupversions {bds000.2,bds001.2, ...
,bds009.2}.
time: day_02
I create a 2nd backupset, all 1000 active backup_versions are consolidated
on tape_02 {bds000.2,bds001.2, ... ,bds009.2,bds010.1,bds011.1, ...
bds999.1}.
Question_1:
I suppose that all 1000 versions on tape_02 are copied from the inventory
of incremental backup_versions. Is this true?
Question_2:
Is it possible to do the following:
copy {bds010.1,bds011.1, ... bds999.1} from tape_01
copy {bds000.2,bds001.2, ... bds009.2} from the incremental backup_versions
This would be more efficient in my environment.
---- end example
Thanks to everybody who can give me some useful comments.
kind regards,
werner
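For reference, the backupset generation described in the example maps to the GENERATE BACKUPSET administrative command. A minimal sketch; the node name WERNER_NODE, set name DAILYSET, and device class LTOCLASS are placeholder names:

```shell
# Consolidate the node's current active backup versions into a backupset
# on tape, retained for 30 days; each run reads all active versions from
# the primary pool, which is the inefficiency the example asks about:
dsmadmc -id=admin -password=secret \
  "generate backupset WERNER_NODE DAILYSET devclass=LTOCLASS retention=30"

# List the backupsets that exist for the node:
dsmadmc -id=admin -password=secret "query backupset WERNER_NODE"
```

Note that GENERATE BACKUPSET always builds the set from the server's inventory of active versions; there is no option to copy unchanged files from a previous backupset tape, which confirms the answer to Question_2.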