I am trying to move a TSM instance to new hardware. The old hardware was running RHEL 7.9 and the new is running RHEL 8.4. The server is getting on in years (it's circa 12 years old), and while it is currently fine, two-thirds of the servers of the same model that we had are now dead; the display fails and just shows multicoloured snow. So new hardware is called for.
The data is on a bunch of 24-drive 4U disk shelves, and the server does software RAID6 with shelf-level redundancy. There are basically 23 arrays, mounted at /backup/disk00, /backup/disk01 and so on (one disk per shelf per array), and I store the data on preallocated sequential files. The last drive in each shelf is a hot spare, and disk22 is used both for backing up the DB etc. and as an NFS share for Spectrum Protect Plus to back up our VMs.
Yesterday I shut down the old server after doing a DB backup and saving the volume history, device configuration, etc., then swapped the server out and cabled the disk shelves up to the new one. A full set of new cables too, as we needed new SAS cards for RHEL 8 and they have SFF-8644 ports. I am doing dm-multipath this time, with dual connections to each shelf.
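For reference, the pre-move backup was along these lines (from memory, so the device class name and the exact paths are placeholders, not necessarily what I typed):

```shell
# Full DB backup to a FILE device class, then save the metadata the
# restore will need. DBBACKUP and the output paths are examples only.
dsmadmc -id=admin -password=*** "backup db devclass=DBBACKUP type=full"
dsmadmc -id=admin -password=*** "backup volhistory filenames=/backup/disk22/volhist.out"
dsmadmc -id=admin -password=*** "backup devconfig filenames=/backup/disk22/devconfig.out"
```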
I can see all the RAID6 arrays; they are correctly assembled through dm-multipath and mounted. I have a 1TB NVMe RAID1 array for the database up and running, and I have installed Spectrum Protect 8.1.12 and set up an instance with the same name.
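For what it's worth, I sanity-checked the storage on the new box roughly like this, and it all looks healthy:

```shell
cat /proc/mdstat          # all md RAID6 arrays assembled and clean
multipath -ll             # two active paths to every shelf
mount | grep /backup      # disk00..disk22 mounted at the expected points
```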
However, I cannot see how one restores the DB into the new instance using DSMSERV RESTORE DB. How does it know where the volumes are? I realize now that in 16 years of running TSM servers I have never had to do this post-6.x, and Googling around produces no useful hits. I have a feeling one should have created a DRM plan? I guess I could start the instance on the old server and dump one out, even if the disks holding the actual data are not attached? Or perhaps I could NFS-export them back to the original server?
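My best guess so far is something like the following, pointing the server at the saved device configuration and volume history so it can locate the DB backup volumes, but I may well have this wrong (the instance directory is just an example from my setup):

```shell
# Run as the instance user. dsmserv.opt in the instance directory
# would need to reference the files saved before the move, e.g.:
#   DEVCONFIG /backup/disk22/devconfig.out
#   VOLUMEHISTORY /backup/disk22/volhist.out
dsmserv -i /home/tsminst1/tsminst1 restore db todate=today
```

Is that the right shape, or does the restore need something more to find the volumes?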
I still have access to the old server, though it doesn't have the disks attached anymore.