Hi List.
Heaven preserve us all from penny-pinching managers.
I've designed a really neat TSM configuration, with half my diskpools at my prime
site and half at the second site. Primary tape is a local 3494; copypools are on
the offsite 3494. All of this is tied together by logical SAN and LAN links that
encompass both datacentres.
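To make the layout concrete, the pool structure is roughly the following
(library devices, pool names and thresholds here are illustrative, not my
real ones):

    /* Local 3494 feeds the primary tape pool */
    define library 3494LOCAL libtype=349x device=/dev/lmcp0
    define devclass CL_3590L devtype=3590 library=3494LOCAL
    define stgpool TAPEPOOL CL_3590L maxscratch=100

    /* Offsite 3494 holds the copypool */
    define library 3494REMOTE libtype=349x device=/dev/lmcp1
    define devclass CL_3590R devtype=3590 library=3494REMOTE
    define stgpool COPYPOOL CL_3590R pooltype=copy maxscratch=100

    /* Disk pools, half at each site, all migrating to primary tape */
    define stgpool DISKPOOL_A disk nextstgpool=TAPEPOOL highmig=70 lowmig=30
    define stgpool DISKPOOL_B disk nextstgpool=TAPEPOOL highmig=70 lowmig=30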
The plan was for a second fibre to be installed between the sites by a diverse
route. Problem is we have the money to purchase the second link, but not to
run it. Ah the joys of working for government.
So, I've been asked to look at a config that uses server to server instead of
the SAN to tie the two sites together.
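As I understand it, that means virtual volumes, along these lines (all names,
passwords and addresses are invented for the example, and the exact parameters
depend on your TSM level):

    /* On the primary: a SERVER device class pointing at the offsite
       instance, with the copypool on virtual volumes instead of the
       remote 3494 */
    define server OFFSITE serverpassword=secret hladdress=10.1.1.2 lladdress=1500
    define devclass CL_REMOTE devtype=server servername=OFFSITE mountlimit=2
    define stgpool COPYPOOL CL_REMOTE pooltype=copy maxscratch=200

    /* On the offsite instance: the primary signs on as a node of type server */
    register node PRIMARY secret type=server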
I know that with normal data, migrations are done by node, largest node first,
so that if all your data is from one node you can only use one tape drive to
migrate. Is this the same with server-to-server?
i.e. if my offsite TSM is only a receiver for copypool data from the primary
TSM, will I be restricted to one migration process? What if there is more than
one offsite pool defined on the offsite TSM?
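For reference, the only knobs I know of are MIGPROCESS on the disk pools and
MAXPROCESS on the storage pool backup, and I'm not sure how either behaves once
a SERVER device class is in the path (both are level-dependent, so treat these
as a sketch):

    /* Parallel migration processes from a random-access pool */
    update stgpool DISKPOOL_A migprocess=4

    /* Parallel processes writing to the copypool */
    backup stgpool TAPEPOOL COPYPOOL maxprocess=2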
Any other gotchas?
Would I be better off splitting the workload between the two instances and
having them back each other up?
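That is, each instance would own half the nodes and keep its copypool on
virtual volumes at the other site, with the definitions mirrored on both
sides, roughly (names again invented):

    /* On SERVER_A: send copypool data to SERVER_B, and accept SERVER_B's */
    define server SERVER_B serverpassword=secret hladdress=10.1.1.2 lladdress=1500
    define devclass CL_TO_B devtype=server servername=SERVER_B mountlimit=2
    define stgpool COPY_ON_B CL_TO_B pooltype=copy maxscratch=200
    register node SERVER_B secret type=server

    /* ...and the mirror image of all four definitions on SERVER_B */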
Thanks
Steve.
Steve Harris
TSM Administrator
Queensland Health, Brisbane Australia