ADSM-L

Re: [ADSM-L] removing offsite data for a particular node

From: Maurice van 't Loo <maurice AT BACKITUP DOT NU>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Sat, 4 Jun 2016 11:31:01 +0200
Hello Gary,

In addition to Thomas's method: in case you don't have collocation on your
copy stgpool, there is still a way to do this. It will cost some space and a
lot of time, but it could be helpful if you really need the space.

1. The node that is migrated to Amazon should be moved to a separate
collocation group of its own. If possible, make the remaining collocation
groups as big as possible.
2. Change the collocation of the copypool from "no" to "group". Be aware that
this will cost at least one new filling tape per collocation group.
3. Use "move nodedata" to move the data of the node within the same
copypool, so that all data of this node ends up on its own private set of
tapes.
4. Delete those copypool volumes.
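At the TSM administrative command line, the four steps above could look
roughly like this; the pool, group, node, and volume names are placeholders,
not anything from the original thread:

```
/* 1. Put the migrated node in its own collocation group */
define collocgroup AMAZON_DONE
define collocmember AMAZON_DONE MIGRATED_NODE

/* 2. Switch the copy pool from no collocation to group collocation */
update stgpool OFFSITE_COPY collocate=group

/* 3. Move the node's data onto its own tapes within the same copy pool */
move nodedata MIGRATED_NODE fromstgpool=OFFSITE_COPY

/* 4. After the move, list and delete the volumes now holding only this node */
query nodedata MIGRATED_NODE stgpool=OFFSITE_COPY
delete volume VOL001 discarddata=yes
```

Note that "move nodedata" run against a copy pool moves the data within that
same pool, which is exactly what step 3 needs.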

If you are really stuck for space and you want to take the additional risk,
you can also look for the tapes holding mostly migrated data and delete
those. You then need to rerun "backup stgpool" to re-create copies of any
data that was deleted along with them.
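A sketch of that riskier shortcut, again with placeholder names:

```
/* Drop a copy pool volume that holds mostly migrated data;
   files from unmigrated nodes on it lose their offsite copy */
delete volume VOL042 discarddata=yes

/* Re-create offsite copies of whatever was deleted too much */
backup stgpool TAPE_PRIMARY OFFSITE_COPY
```

Between the delete and the completed "backup stgpool", the affected files
from unmigrated nodes have no copy pool backup; that window is the
additional risk.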

Or, in case you have some spare tapes but just no slots, you can always
check out copy tapes and fill the robot with new ones.
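The checkout/checkin pair for that would be along these lines (library and
volume names are made up):

```
/* Eject a copy pool tape to the bulk I/O door */
checkout libvolume LIB3494 VOL123 remove=bulk checklabel=no

/* Check fresh tapes in as scratch */
checkin libvolume LIB3494 search=bulk status=scratch checklabel=barcode
```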

Good luck,
Maurice van 't Loo

2016-06-03 19:40 GMT+02:00 Thomas Denier <Thomas.Denier AT jefferson DOT edu>:

> If you are using collocation groups successfully, you could migrate all
> the nodes in a collocation group to Amazon and then execute "delete volume"
> commands for the copy pool volumes belonging to the group.  In this
> context, using collocation groups successfully means avoiding situations
> where data from two or more collocation groups ends up on the same tape
> volume. Such situations can occur because nodes were moved between groups
> or because the copy pool ran low on scratch volumes at some point. You can
> use the "query nodedata" command to figure out which volumes belong to each
> collocation group and to identify volumes split between groups.
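One way to get that per-volume picture, sketched with placeholder names, is
"query nodedata" per node, or a SELECT against the VOLUMEUSAGE table to spot
volumes shared by nodes from different groups:

```
/* Volumes holding one node's data in the copy pool */
query nodedata NODE_A stgpool=OFFSITE_COPY

/* All node/volume pairs in the copy pool; a volume listed under
   nodes from two collocation groups is split between groups */
select node_name, volume_name from volumeusage where stgpool_name='OFFSITE_COPY'
```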
>
> If the process described above is unsuitable, I think you could use the
> following process at multiple points during the migration:
>
> 1. Use output from "query nodedata" to identify copy pool volumes with
> large amounts of data from nodes that have been migrated to Amazon.
> 2. Execute "delete volume" commands for the volumes identified in step 1.
> 3. Execute a "backup stgpool" command to write new copies of files that
> came from unmigrated nodes and got deleted in step 2.
> 4.Send the volumes written in step 3 to the vault.
> 5.Recall the volumes cleared in step 2.
>
> You will need to think very carefully about the recoverability
> implications. In particular, you will need to avoid having all of the
> offsite copies of specific files end up onsite at the same time. If space
> at the vault is very tight, this might entail the use of a temporary
> storage location separate from either the vault or your data center.
>
> Thomas Denier
> Thomas Jefferson University
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On
> Behalf Of Lee, Gary
> Sent: Friday, June 03, 2016 11:19
> To: ADSM-L AT VM.MARIST DOT EDU
> Subject: [ADSM-L] removing offsite data for a particular node
>
> We are slowly moving our primary tsm data storage out into the amazon
> cloud.
>
> Since this is by definition off site, our off site tape pool can go away.
> At least that is the current thinking, and must happen because our 3494
> libraries go out of support next year.
>
> Given this, how, once a node's data is out in Amazon, can I remove its
> data from the offsite pool?
> We are stretched very thin; the offsite library is full, with no chance of
> adding more slots.
>
> Any help appreciated.
>
