ADSM-L

Re: Restore times

Subject: Re: Restore times
From: Simon Watson <simon.s.watson AT SHELL.COM DOT BN>
Date: Mon, 6 Dec 1999 07:29:32 +0800
This is also why ADSM gives you the ability to enable Collocation
(even Filespace Collocation for really big systems).  Collocation
ensures that each node's data is spread over the minimum number of
tapes.  Restores are then a no-brainer!
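
In case it helps anyone: collocation is a storage pool attribute, so
turning it on is one command from an admin session.  A rough sketch,
with a made-up primary tape pool name of TAPEPOOL (check "help update
stgpool" at your ADSM level for the exact values it supports):

     update stgpool TAPEPOOL collocate=yes         (group data by node)
     update stgpool TAPEPOOL collocate=filespace   (group data by filespace)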

Regards,
Simon
----------
| From: payne AT BERBEE DOT COM
| To: ADSM-L AT VM.MARIST DOT EDU
| Subject: Re: Restore times
| Date: Saturday, 04 December, 1999 12:09AM
|
| I have a question on this.  Won't running reclamation on your offsite
| storage pools get the data onto as few tapes as possible?  If so, then why
| would you run full backups weekly?  How could anyone with a large number of
| servers (100+) that run 7 days a week afford weekly full backups?  The
| fact that you don't need to do this is one of the reasons I see ADSM as
| a superior backup product to everything else out there.
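|
| (A sketch of the mechanism I mean, with an invented pool name:
| reclamation is driven by a threshold on the pool, so something like
|
|      update stgpool OFFSITEPOOL reclaim=60
|
| would consolidate any volume that is 60% or more reclaimable onto
| fewer tapes.)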
|
| Kyle
| payne AT berbee DOT com
|
| -----Original Message-----
| From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU]On Behalf Of
| Nick Cassimatis
| Sent: Thursday, December 02, 1999 8:17 AM
| To: ADSM-L AT VM.MARIST DOT EDU
| Subject: Re: Restore times
|
|
| Well, I can tell you from experience that your restore time will be:
|
|      Data Transfer time
| +    Tape Mount Time (library movement or (yuck!) manual mounts)
| +    Tape seek time
| __________________________
| =    Total restore time
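|
| With made-up numbers, for a restore scattered over 30 tapes:
|
|      30 mounts x 2 min         =   60 min mount time
| +    30 seeks x 1 min          =   30 min seek time
| +    2 GB at ~5 MB/sec         = ~  7 min transfer time
| __________________________
| =    ~97 min total restore time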
|
| Using a copy pool, the active files for a node can easily be spread out
| over dozens and dozens of tapes.  The longest factor involved above is the
| Tape Mount Time.  You may have a tape with one 1k file on it, so data
| transfer time is effectively 0, but you still have to mount the tape and
| position it to read the file.
|
| Implementing a strategy to reduce the number of tapes that must be mounted
| for the restore is the best (only??) way to reduce this factor
| significantly.  I implemented weekly full backups, and
| restore times for a large filesystem went from 18 hours to 20 minutes.
| Your mileage may vary, but I doubt you'll be too disappointed.
|
| My full backups use a separate node name attached to a separate Policy
| Domain.  At the copy group level of that domain, the Copy Mode is set to
| Absolute, which forces a full backup.  For a restore, I restore from this node first,
| then restore from the "normal" node for the updates since the last full
| backup.  As I see it, if you are restoring to 6 days after your last full
| backup, you should have no more than 7 tape mounts (one for the full, one
| for each day since).  As I said above, the improvements were dramatic.
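|
| (A sketch of that setup, with invented names - adjust to taste:
|
|      define domain FULLS
|      define policyset FULLS WEEKLY
|      define mgmtclass FULLS WEEKLY STANDARD
|      define copygroup FULLS WEEKLY STANDARD type=backup mode=absolute destination=TAPEPOOL
|      assign defmgmtclass FULLS WEEKLY STANDARD
|      activate policyset FULLS WEEKLY
|      register node SERVER1_FULL password domain=FULLS
|
| then run the weekly full from the client under the alternate node name,
| e.g. "dsmc incremental -nodename=SERVER1_FULL".  With mode=absolute, the
| incremental sends every file, i.e. a full.)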
|
| Nick Cassimatis
| nickpc AT us.ibm DOT com
|
| If you don't have the time to do it right the first time, where will you
| find the time to do it again?
|