Advice on filesystem (file space) type change (ext4 to xfs) - TSM v7.1.3

xyzegg

ADSM.ORG Member
Hi,

From the perspective of a TSM administrator, could you give me any advice on what I should do before or after a client migrates a filesystem from ext4 to xfs when its file spaces are already stored on the TSM server?

Will the client update the type of the file space?
What about renaming the filespace to something else after expiring the data?

The filesystem name will remain the same;
Server hardware will change;
OS will change (Red Hat Linux to Oracle Linux);

Thanks,

xyzegg
 
Will the client update the type of the file space?
No.
What about renaming the filespace to something else after expiring the data?
Not sure what you mean by "after expiring". But since the filesystem name will remain the same, you will need to rename the old filespace to something else before you start backing up the new filesystem, or else the backup will fail.
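On the TSM server, that rename could be done with the administrative RENAME FILESPACE command from a dsmadmc session; something like this (the node name and new filespace name here are hypothetical placeholders, not values from this thread):

```
rename filespace nodename /fs /fs_ext4
```

After the rename, the next incremental of the same mount point creates a fresh filespace for the new filesystem, while the old data stays queryable under the renamed filespace.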
 

marclant,


I did a lab:

dd if=/dev/zero of=/tmp/disk bs=1G count=10
losetup /dev/loop0 /tmp/disk
mkfs.ext4 /dev/loop0
mkdir /mnt/disk
mount -t ext4 /dev/loop0 /mnt/disk/

Copied some data to /mnt/disk...

dsmc incr /mnt/disk/
umount /mnt/disk
mkfs.xfs -f /dev/loop0
mount -t xfs /dev/loop0 /mnt/disk/

Copied the same data to /mnt/disk/ and I did another incremental...
dsmc incr /mnt/disk/

Output:

IBM Tivoli Storage Manager
Command Line Backup-Archive Client Interface
Client Version 7, Release 1, Level 2.0
Client date/time: 12/13/2016 19:41:21
(c) Copyright by IBM Corporation and other(s) 1990, 2015. All Rights Reserved.

Node Name: CENTOS-TEST
Session established with server BKPRJ1R: Linux/x86_64
Server Version 7, Release 1, Level 3.0
Server date/time: 12/13/2016 19:41:05 Last access: 12/13/2016 19:34:45


Incremental backup of volume '/mnt/disk/'
Updating--> 308 /mnt/disk/dsmerror.log [Sent]
Updating--> 6,871,861,248 /mnt/disk/dummy_file.out [Sent]
Directory--> 6 /mnt/disk/lost+found [Sent]
Successful incremental backup of '/mnt/disk/*'


Total number of objects inspected: 4
Total number of objects backed up: 1
Total number of objects updated: 2
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 0
Total number of objects failed: 0
Total number of objects encrypted: 0
Total number of objects grew: 0
Total number of retries: 0
Total number of bytes inspected: 6.39 GB
Total number of bytes transferred: 107 B
Data transfer time: 0.00 sec
Network data transfer rate: 0.00 KB/sec
Aggregate data transfer rate: 0.02 KB/sec
Objects compressed by: 0%
Total data reduction ratio: 100.00%
Elapsed processing time: 00:00:05


Why did it work? Maybe because it's a loop device... I don't know; maybe I will notice some inconsistencies later.

But let us suppose that it will not work. To solve this, I would rename the old filespace to something else before backing up the new filesystem.
The question is:

Will the backup data stored in the old filespace remain active forever? If so, I have to expire it someday.
 
Why did it work?
Maybe there are more similarities between those filesystems than with other types. Did you happen to check query filespace before/after the changes? Check the activity log around the time of the backup to see if it mentions a filespace change. It might be worth doing more tests; if objects get updated as opposed to backed up again, the first backup after the migration will be much quicker. I'd do a restore test too after the first backup, just to be on the safe side.
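Those checks could look something like this on the lab node from the test above (a sketch; the admin ID/password and the restore target path are assumptions, not values from this thread):

```
dsmadmc -id=admin -password=secret "query filespace CENTOS-TEST /mnt/disk format=detailed"
dsmadmc -id=admin -password=secret "query actlog begindate=today"
dsmc restore /mnt/disk/ /tmp/restoretest/ -subdir=yes
```

query filespace with format=detailed shows the filespace type the server has recorded, query actlog lets you scan for filespace-related messages around the backup window, and the dsmc restore to an alternate path is a non-destructive way to verify the backed-up data.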

Will the backup data stored in the old filespace remain active forever? If so, I have to expire it someday.
Inactive data will expire; the data that is active will never expire. You could expire it with the client:
dsmc expire "{/oldfilespace}/*" -subdir=yes
That will mark all the active objects as inactive, and they will expire once they hit the retention.

Or you could schedule the filespace deletion to run in X number of days; for example, 30 days from now (note the active=yes, since administrative schedules are created inactive by default):
define schedule deletefilespace type=admin startdate=+30 active=yes cmd="delete filespace nodename /tmp"
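As a quick sanity check (a sketch; run from an administrative session), the schedule definition can be listed afterwards with:

```
query schedule * type=administrative
```

That confirms the schedule exists and whether it is active before the 30 days are up.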
 
Maybe there are more similarities to those filesystems than with other types. Did you happen to check query filespace before/after the changes?
Yes. First, I noticed that the filespace type was not updated, but a second test showed me the filespace type updated. I'll conduct the experiment again.

Check the activity log around the time of the backup to see if it mentions a filespace change.
Nothing relevant.

It might be worth doing more tests; if objects get updated as opposed to backed up again, the first backup after the migration will be much quicker. I'd do a restore test too after the first backup, just to be on the safe side.
So far, I did one restore and the data was OK!



Thanks for the info. You always help me a lot!
 