Subject: Re: synthetic fullbackup
From: Halvorsen Geirr Gulbrand <gehal AT WMDATA DOT COM>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Wed, 18 Dec 2002 14:44:20 +0100
Hi Werner,
we might need some clarification of your setup.
What is your server version?
Are you backing up to tape or to disk?

Generally, I can say this:
If you are running TSM v5.x, you can use MOVE NODEDATA, which moves the data
for one node to another storage pool (e.g. from tape to disk), and then start
your restore from the disk pool. It may sound strange, because you move the
data twice, but there is often a delay between the time you decide to restore
and the time you actually start the restore (for example in a disaster
recovery situation, where you have to get new hardware and install the OS and
TSM client software before the restore can begin). In that interval you can
already start moving data from tape to disk, and the subsequent restore will
be a lot faster.
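Just as a rough sketch (the node and pool names below are only placeholders,
and you should check the exact MOVE NODEDATA syntax in the Administrator's
Reference for your server level), from an administrative command line it
could look something like this:

  /* WERNER_NODE, TAPEPOOL and DISKPOOL are placeholder names */
  move nodedata WERNER_NODE fromstgpool=TAPEPOOL tostgpool=DISKPOOL
  /* watch the progress of the move */
  query process

Once the move has completed, the restore reads the node's data from the disk
pool instead of tape.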
The other possibility is to use collocation by filespace. Different
filespaces from the same server will be collocated on different tapes,
enabling you to start a restore for each filespace simultaneously. This
helps reduce restore times.
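If your tape pool does not already have it set, filespace collocation is an
attribute of the storage pool; something like the following should do it
(TAPEPOOL is a placeholder pool name, and note that the setting only affects
data written after the change):

  /* TAPEPOOL is a placeholder storage pool name */
  update stgpool TAPEPOOL collocate=filespace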
The third option is to use backup sets, which can be created for active files
only. Then you will have all active files on one volume.
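Again only as an illustration (node, backup set prefix and device class names
are made up; see GENERATE BACKUPSET in the Administrator's Reference for the
full syntax), creating such a backup set on the server could look like:

  /* WERNER_NODE, WEEKLY_ACTIVE and VTLCLASS are placeholder names */
  /* '*' means all filespaces of the node */
  generate backupset WERNER_NODE WEEKLY_ACTIVE * devclass=VTLCLASS

The resulting backup set can later be restored with the client's RESTORE
BACKUPSET command.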
Others may also have an opinion on the best approach to solve this. I have
just pointed out some of TSM's features.

Rgds.
Geirr Halvorsen
-----Original Message-----
From: Schwarz Werner [mailto:Werner.Schwarz AT BEDAG DOT CH]
Sent: 18 December 2002 14:08
To: ADSM-L AT VM.MARIST DOT EDU
Subject: synthetic fullbackup


We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time. Disk as a primary
STGPool is too expensive.
Now we are looking for methods to 'cluster together' all active backup
versions per node without backing up the whole TSM client every night (as
VERITAS NetBackup does). Ideally the full backup would be built on the
TSM server (starting with an initial full backup, then combining that full
backup with the incrementals from the next run to produce the next synthetic
full backup, and so on). We have already activated COLLOCATE. Does anybody
have good ideas?
thanks,
werner
