Networker

Subject: Re: [Networker] Advice on migrating a large (2TB) AFTD storage node to new hardware
From: Michael Leone <Michael.Leone AT PHA.PHILA DOT GOV>
To: NETWORKER AT LISTSERV.TEMPLE DOT EDU
Date: Wed, 13 Jul 2011 08:16:08 -0400
Comments anyone? I have also opened a question case with EMC, to see what 
they have to say about it. Still waiting to hear, as question cases are 
lower priority than problem cases, of course.

> I hope I can explain this clearly ... This is all on NW 7.5.2, all on 
> Windows.
> 
> I have a 2-node MS Windows 2003 cluster (these are my main file shares).
> We are - at the moment - backing up straight to tape (nightly
> differentials, weekly fulls). All works well. Total size of savesets
> over the course of 1 week is close to 3TB.
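> 
> (A quick way to double-check that weekly total is to query the media
> database with mminfo. This is only a rough sketch - the date and the
> report attributes are examples and would need adjusting to the real
> schedule:
> 
>     mminfo -a -q "client=NT_SAN1,savetime>07/04/2011" -r "savetime,level,totalsize,name"
> 
> which lists each saveset for NT_SAN1 since that date with its level and
> size, so the weekly total can be added up.)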
> 
> Now, I need to migrate all of this to a new 2-node Win2008 cluster, and
> instead of going straight to tape, keep a week's worth of backups on
> AFTD devices (disk drives). The weekly full backup takes around 20
> hours, so there is no way I can do a full backup to tape and then a full
> restore to disk on the new cluster within the migration window (a
> weekend).
> 
> So here's what I am thinking: I make the new cluster nodes non-dedicated
> storage nodes, each with its own 4.5 TB AFTD drive. I back up the old
> file server virtual client (called NT_SAN1) to the AFTD device on the
> new cluster (NEWFILE001). Then I do a directed recover from the AFTD
> device to the larger disk drives of the nodes that will be hosting the
> new virtual client resource (which will also be called NT_SAN1;
> basically, we will recreate NT_SAN1 with the same name, IP address, and
> shares on the new cluster). That should be the quickest way to move all
> that data to the larger drives of the new cluster.
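> 
> (A rough sketch of that directed recover step, assuming it is run on the
> new node itself, that the NW server name and the drive letters below are
> just placeholders, and that NT_SAN1's "remote access" list lets the new
> node browse its index:
> 
>     recover -s <nw_server> -c NT_SAN1 -iY -d F:\data -a E:\
> 
> i.e. browse NT_SAN1's backups, overwrite without prompting, and relocate
> everything that lived under E:\ on the old virtual client to F:\data on
> the node running the command.)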
> 
> So: I have a current NW client called NT_SAN1. I create 2 new storage
> node clients, NEWFILE001 and NEWFILE002, each with their own 4.5TB AFTD
> devices. I reconfigure NT_SAN1 to change its storage node setting (on
> Globals 2 of 2, in NMC) to be NEWFILE001. I run a backup job that backs
> up NT_SAN1 to the AFTD device of NEWFILE001. Then I do a directed
> recover from the AFTD device to NEWFILE001's storage drive. This gets
> all data and NTFS security and permissions from the old virtual client
> to the new physical node.
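> 
> (For creating the AFTD devices, something along these lines in nsradmin
> ought to do it - the device path is an example, and adv_file is the AFTD
> media type in 7.5.x:
> 
>     nsradmin> create type: NSR device; name: "rd=NEWFILE001:D:\aftd01"; media type: adv_file
> 
> then label and mount it into a disk pool from NMC. The backup itself
> would just be the normal group run, e.g.
> 
>     savegrp -l full -c NT_SAN1 <group_name>
> 
> once NT_SAN1's storage node list points at NEWFILE001.)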
> 
> Then we shut down the old cluster, and re-create the virtual client
> resource (named NT_SAN1) on the new cluster, along with all of its
> shares, permissions, etc. (hopefully there is a wizard to do all that,
> rather than us doing it manually). As long as the virtual client
> resource NT_SAN1 DNS name and IP address match the NW client NT_SAN1 DNS
> name and IP address, NW should be none the wiser, and just continue to
> back up NT_SAN1 happily.
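> 
> (One way to document the existing shares before the cutover, assuming
> they are ordinary SMB shares visible on whichever node currently owns
> the resource:
> 
>     net share > nt_san1_shares.txt
> 
> captures the share names and paths, and non-clustered share definitions
> also sit in the registry under
> HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares, which "reg
> export" can save as a reference. On the Win2008 side the shares would
> still be created as clustered file share resources (Failover Cluster
> Management has a wizard for that); the NTFS permissions themselves come
> over with the recover.)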
> 
> We did something almost exactly like this a few weeks ago, moving a SQL
> cluster. Since the cluster resource name and IP stayed the same, I
> didn't need to make any NW changes to the NW client that corresponded to
> the virtual cluster resource. As far as NW was concerned, nothing really
> changed, since it still found a client with that same name and IP
> address exactly where NW had been told to find it.
> 
> SO:
> 
> Thoughts on the plan as a whole?
> Changes needed on the NW NT_SAN1 client should be limited to adding
> NEWFILE001 as a storage node entry (to the existing entries of
> "curphyhost" and "nsrserver"). That will allow the backup job to save
> NT_SAN1 to the AFTD device of NEWFILE001.
> After the recover to NEWFILE001, all I will need to do is remove the
> storage node name (i.e., go back to "curphyhost" and "nsrserver").
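> 
> (That storage node change can also be scripted if need be - a sketch
> using nsradmin, with the attribute values taken from the list above:
> 
>     nsradmin> . type: NSR client; name: NT_SAN1
>     nsradmin> update storage nodes: NEWFILE001, curphyhost, nsrserver
> 
> and the same update, minus NEWFILE001, to put it back afterwards. The
> same edit can of course be made on Globals 2 of 2 in NMC.)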
> 
> We're going to have to do this in 2 weeks. Reason for the rush is that
> we are running out of disk space on NT_SAN1, so we need to move it to
> new hardware with bigger drives.
> 
> Please feel free to ask questions; I realize it's a bit convoluted, and
> I may not have explained it as well as I had hoped.
> 
> Thanks
