TSM 6.3.5.1 on Redhat 5.8 to TSM 7.1.5 on Redhat 7.2

moon-buddy

ADSM.ORG Moderator
I just want to run this by the forum to see if there is a better way of moving my old TSM 6.3.5.1 on Redhat 5.8 to a new TSM 7.1.5 server running Redhat 7.2.

As everyone knows, restoring a TSM DB from one TSM version to another is not possible. Thus the only way is to do an 'export server' using the server-to-server method.

What I am thinking of doing:

- build the new TSM 7.1.5 server on Redhat 7.2
- run 'export server filedata=all ...' from the old server to the new one (this command transfers all filedata information from the old to the new)
- once all is done, swing the back-end data storage (which in my case is Data Domains) over to the new server, rename the new TSM server to the old server's name, and let the nodes start backing up
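
For what it's worth, the server-to-server piece might be sketched like this (the server name, password, and addresses are placeholders, not from this thread):

```
/* On the old 6.3.5.1 server: define the new server as an export target */
define server NEWTSM serverpassword=secret hladdress=newtsm.example.com lladdress=1500
/* Preview first to see how much data would move */
export server filedata=all toserver=NEWTSM previewimport=yes
/* Then run the real export */
export server filedata=all toserver=NEWTSM
```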

Does this sound right?

Caveats:

- TSM 6.3.5.1 cannot be installed on Redhat 7.2 - IBM does not support this
- I cannot upgrade the old TSM 6.3.5.1 to run on Redhat 6.x - Redhat does not support an upgrade in place

If only I could install 6.3.5.1 on Redhat 7.2, a restore of the TSM 6.3.5.1 DB would be possible, followed by an update to TSM 7.1.5.

Comments?
 
If time is on your side, I'd do the export/import one node (or group of nodes) at a time over a longer period. That gives you less downtime, but you'd have to touch every node to point it to the new server. Whereas with your method, you don't need to touch the clients - well, you may need to re-enter the password if the client detects a change.

TSM 6.3.5.1 cannot be installed on Redhat 7.2 - IBM does not support this
While this is true, it would be short lived. So in theory, you could do:
  1. - backup DB on original server
  2. - install 6.3.5.1 on new server
  3. - restore db
  4. - upgrade to 7.1.5
  5. - if everything is ok, move storage and be done
You would be in trouble if you have problems with step 2 or 3. It's something you could test ahead of time if you are comfortable with the risk.

And if it fails, you can always revert to your original plan, because you are not modifying the original server.
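
Sketched as commands, with device class and date values as placeholders (and with the understanding that steps 2 and 3 are exactly where the unsupported-OS risk sits):

```
/* 1. On the original 6.3.5.1 server (admin console): full DB backup */
backup db devclass=DBFILE type=full
/* 2. Install 6.3.5.1 on the new server - the unsupported, risky part */
/* 3. Restore the DB there, at the OS prompt as the instance owner */
dsmserv restore db todate=today
/* 4. Install the 7.1.5 code over it; the DB is converted on upgrade */
/* 5. If everything checks out, move the storage over */
```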
 
Marclant,

This was my original plan:

While this is true, it would be short lived. So in theory, you could do:
  1. - backup DB on original server
  2. - install 6.3.5.1 on new server
  3. - restore db
  4. - upgrade to 7.1.5
  5. - if everything is ok, move storage and be done
but TSM 6.3.5.1 WOULD NOT even install on Redhat 7.2. Thus the export route.
 
but TSM 6.3.5.1 WOULD NOT even install on Redhat 7.2
It would be a bit of work, but less data movement. Could you use a temporary machine with Linux 6.x?
  1. backup db on original server to a file devclass
  2. Install 6.3.5.1 on temp machine
  3. restore the DB on temp machine
  4. make sure that you have NOMIGRRECL, EXPINTERVAL 0, and DISABLESCHEDS YES in the server options file
  5. upgrade to 7.1 on temp machine
  6. backup the DB on temp machine to a file devclass
  7. restore DB on new server
  8. flip storage from the original to the new server.
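
The file-devclass round trip in steps 1, 3, 6 and 7 might look like this (directory, size, and class names are placeholders):

```
/* On the original server: a FILE device class with room for the full DB */
define devclass DBFILE devtype=file directory=/tsmdbdump maxcapacity=50G mountlimit=2
backup db devclass=DBFILE type=full
/* Copy /tsmdbdump plus the volume history and device configuration files
   to the temp machine, then restore there at the OS prompt: */
dsmserv restore db todate=today
```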
 
Correct me if I am wrong: using 'export server filedata=all ...' does not actually move node data to the target TSM server, but only the client definitions, client backup information, admin schedules, etc. The real data still sits on the tape (PTL or VTL).

So, if this is the case, then I can use 'export server filedata=all ...' and then do what I originally posted.

By the way, doing this:
It would be a bit of work, but less data movement. Could you use a temporary machine with Linux 6.x?
  1. backup db on original server to a file devclass
  2. Install 6.3.5.1 on temp machine
  3. restore the DB on temp machine
  4. make sure that you have NOMIGRRECL, EXPINTERVAL 0, and DISABLESCHEDS YES in the server options file
  5. upgrade to 7.1 on temp machine
  6. backup the DB on temp machine to a file devclass
  7. restore DB on new server
  8. flip storage from the original to the new server.

is an option I discarded since I need to move 9 physical TSM servers!
 
Filedata=all will export all the data for all the nodes. So you would need to have the storage on the target to accept all the data being exported by the source.
FILEData
Specifies the type of files to export for all nodes defined to the server. This parameter is optional. The default value is NONE.

If you are exporting to sequential media: The device class to access the file data is determined by the device class for the storage pool. If it is the same device class specified in this command, Tivoli® Storage Manager requires two drives to export server information. You must set the mount limit for the device class to at least 2.

The following descriptions mention active and inactive backup file versions. An active backup file version is the most recent backup version for a file that still exists on the client workstation. All other backup file versions are called inactive copies. The values are:
ALL
Tivoli Storage Manager exports all backup versions of files, all archived files, and all files that were migrated by a Tivoli Storage Manager for Space Management client.

None
Tivoli Storage Manager does not export files, only definitions.
source: http://www.ibm.com/support/knowledg...rence/r_cmd_server_othsvr_export.html?lang=en
 
Is there a way to forklift the client definitions, backup info and versions, admin schedules, scripts, etc from one TSM server to another using the 'export' option?

The idea is to forklift node and admin info, etc and start backing up on TSM 7.1.5 moving forward while the backup data on TSM 6.3.5.1 ages.
 
Is there a way to forklift the client definitions, backup info and versions, admin schedules, scripts, etc from one TSM server to another using the 'export' option?
Yes, that's what you get with filedata=none, you get everything except the data.

By default, you get this:
  • Policy domain definitions
  • Policy set definitions
  • Management class and copy group definitions
  • Schedules defined for each policy domain
  • Administrator definitions
  • Client node definitions
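
So the "everything but the data" forklift would just be (the target server name is a placeholder):

```
/* Definitions only - nodes, admins, domains, schedules - no file data */
export server filedata=none toserver=NEWTSM
/* Or, to sequential media instead of server-to-server: */
export server filedata=none devclass=EXPFILE scratch=yes
```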
 
That is what I thought with filedata=none.

If all relevant information has been passed on, why is it that the backup data cannot be accessed? My back end is of devclass=file, and if truly all definitions have been exported, why would the data not be visible to the new TSM server when I swing the storage over to it?

If data is not available, then it simply means that this is not a true export. Metadata - which resides in the TSM DB - is not passed on.

I have not played or worked long enough with export that is why I need to cross all my Ts and dot my Is.
 
Because the backups table is empty. The node definitions are there, but not the "metadata" for lack of a better term, even the filespaces won't be there.

You are basically looking for a way to export the pointers to the data without exporting the data, but that's not an option.
 
I'm in a similar situation.

I'm running TSM 6.3.2.200 on RHEL 5.11 on old hardware.

I want to end up running TSM 7.1.5.100 on RHEL 7.2 on new hardware.

I was hoping I could install TSM 7.1.5.100 on the old RHEL 5.11 server to upgrade to a TSM version supported on RHEL 7.x, and then rsync the storage pools etc. over to the new hardware.

Does anyone know if that is possible or do I have to go through a temp RHEL6 machine to upgrade the DB?
 
... or do I have to go through a temp RHEL6 machine to upgrade the DB?

This seems your only option. I looked at all possibilities including loading TSM 6.3.5.1 on Redhat 7.2 and no joy.

My case is a lot harder to accomplish since I have 7 physical servers to move.
 
Alas.. :(

At least I have some time to do the upgrade since I have to allow for a "final" rsync of all storage pool volumes from the old hardware to the new.
During that time I should be able to restore the database to the temp machine, upgrade to TSM v7.1.x.xxx, dump the database again and restore it on the new hardware.

Last time I did an rsync transfer of all our data (5 or 6 years ago, when going from TSM 5.x to 6.x and to new hardware), I got rsync up above 90MB/s over a 1Gbit connection by avoiding deltas and compression and running the native protocol (not over ssh). This time I anticipate at least 500GB to 1TB worth of storage pool volume changes accumulating since the last successful rsync - it's faster to transfer a whole 5GB stgpool volume than to wait for rsync to calculate the delta for even just a few changed bytes.
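
That whole-file, daemon-mode transfer might look something like this (the module name and paths are made up):

```
# -a archive mode, -W whole-file (skip the delta algorithm), no -z (no
# compression); host::module speaks the native rsync daemon protocol, not ssh
rsync -avW --progress /tsm/stgpool/ newserver::tsmpool/
```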

I'll probably be back at a later point in time for a few recommendations to my plan - a few preflight tests would probably also be in order.

Good luck with your 7 servers..

/Jens, who'll just go sit in a corner cursing IBM for dropping RHEL5 support too soon..
 