backup and archive

aymen

Hi,
We have Tivoli backups in my office and I am very new to this environment. We have SAP installed here and this backup is done only for SAP. We have offsite backup configured: daily, two tapes should go offsite. My problem is that I am always out of scratch tapes. Recently I added some 20 tapes to the library. Sometimes I get a lot of vault retrieves, and most of the time I am out of scratch tapes. Because of this I am not able to get the offsite backup tapes which I have to send offsite, and the database backup is also affected: if there are no scratch tapes, the database backup will not take place.

Can anyone explain this setup to me, and how I can smoothly send the offsite backups?

I will be thankful if anyone can help me.

Regards,
Aymen
 
You need new tapes before learning about DRM issues on TSM... Whoever installed and configured your TSM, ask them about the procedures and try to learn everything from them. If you get stuck on some cases, we will be happy to help you...
 
Hi Nezih,
Thanks for your quick reply. IBM configured our Tivoli system, and I have learned most of the things from them. I am just stuck here on scratch tapes. The daily scripts are running fine, backups are going well, and space reclamation and expiration are working well, but most of the tapes which are offsite in the vault have to come back as scratch, and that is not happening. If you can guide me or give me a hint I will be thankful.
 
Hi,
Can anyone help me with this matter? Thanks in advance.
 
hi,

the volumes marked as "vault" will have to be set to the "R/W" state again before inserting them into the library. They will be available as "scratch" as long as they do not have any data on them, in accordance with the policy you have configured.
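For example, something along these lines should do it once a tape is back in the library (the volume name is only a placeholder, use one of your own):

Code:
update volume A00004L3 access=readwrite
q volume A00004L3 f=d

If the volume is empty and the reuse delay has passed, it should drop out of the pool and come back as scratch.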

hope this helps
max
 
Hi,
Max, thanks for your reply. Please find attached a screenshot that shows our configured policy. Now I have around 20 tapes in vault position; I have checked their status and most of them show 2% or 0% utilized. Please tell me how to see which tape belongs to which resource in vault position.
I have tried to scratch the tapes which are 0% utilized, but it gives me an error that data exists on the tape.

Many thanks for your help.
Aymen
 

Attachments:

  • tivo.jpg (59.8 KB): screenshot showing the configured policy
hi,
sorry, that's not enough information. You do not need to "scratch the tapes": you just have to set a volume's state to R/W, provided it is empty, and it will be available again. You may want to check the value of the "reuse" parameter to see after how many days an empty volume becomes available; only then will it return to scratch.
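To check it, something like this should show the value (the pool name is only an example, use your own tape pool); look at the "Delay Period for Volume Reuse" line:

Code:
q stgpool prod_tape_pool f=d

If you ever need to change it on a sequential pool, "update stgpool prod_tape_pool reusedelay=0" should do it.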
cheers
max
 
Hi Max, thanks for answering. Can you please check the info below? As you suggested, I checked the reuse delay period. The info is for both the disk pool and the tape pool.

tsm: SAPHAD_TSM>q stg prod_disk_pool f=d

Storage Pool Name: PROD_DISK_POOL
Storage Pool Type: Primary
Device Class Name: DISK
Estimated Capacity: 10 G
Space Trigger Util: 28.2
Pct Util: 28.2
Pct Migr: 28.2
Pct Logical: 100.0
High Mig Pct: 90
Low Mig Pct: 70
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 1
Reclamation Processes:
Next Storage Pool: PROD_TAPE_POOL
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: Production Systems Disk Pool
Overflow Location:
Cache Migrated Files?: No
Collocate?:
Reclamation Threshold:
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed:
Number of Scratch Volumes Used:
Delay Period for Volume Reuse:
Migration in Progress?: No
Amount Migrated (MB): 41,288.45
Elapsed Migration Time (seconds): -1,538,174
Reclamation in Progress?:
Last Update by (administrator): ADMIN
Last Update Date/Time: 05/14/06 12:23:40
Storage Pool Data Format: Native
Copy Storage Pool(s):
Continue Copy on Error?:
CRC Data: No
Reclamation Type:

tsm: SAPHAD_TSM>q stg prod_tape_pool f=d

Storage Pool Name: PROD_TAPE_POOL
Storage Pool Type: Primary
Device Class Name: LTO
Estimated Capacity: 98,208 G
Space Trigger Util:
Pct Util: 1.8
Pct Migr: 4.0
Pct Logical: 99.9
High Mig Pct: 90
Low Mig Pct: 70
Migration Delay: 0
Migration Continue: Yes
Migration Processes: 1
Reclamation Processes: 1
Next Storage Pool:
Reclaim Storage Pool:
Maximum Size Threshold: No Limit
Access: Read/Write
Description: Tape Pool for Production Systems
Overflow Location:
Cache Migrated Files?:
Collocate?: Group
more... (<ENTER> to continue, 'C' to cancel)

Reclamation Threshold: 60
Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed: 100
Number of Scratch Volumes Used: 4
Delay Period for Volume Reuse: 0 Day(s)
Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
Last Update by (administrator): ADMIN
Last Update Date/Time: 04/02/06 12:37:35
Storage Pool Data Format: Native
Copy Storage Pool(s):
Continue Copy on Error?:
CRC Data: No
Reclamation Type: Threshold
 
hi,
the reuse parameter does not mean anything for disk pools. It is set to zero on your prod_tape_pool, which means that volumes will return to scratch as soon as they have no valid data on them (which is good for you, since you are running out of scratch volumes).
The problem here is that PROD_TAPE_POOL is a primary storage pool, and you can send tapes offsite only when they come from a COPYPOOL, which is not shown above.
In other words, volumes can return to scratch in 2 cases: for primary pools, when the reclamation process runs with an appropriate threshold; and for copy pools, when after the copy storage pool backup process some of the offsite volumes no longer hold any valid data: after the reuse delay (which might be zero) they will return to the scratch state as soon as you set them R/W.
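If it turns out you do have a copy pool for the offsite tapes, a bulk update along these lines should flip the empty offsite volumes back to R/W in one shot (the pool name is only a placeholder, use your copy pool's name):

Code:
update volume * access=readwrite wherestgpool=<your_copypool_name> whereaccess=offsite wherestatus=empty

And since you are seeing vault retrieves, if DRM is driving the vault states then something like "move drmedia * wherestate=vaultretrieve tostate=onsiteretrieve" should walk the empty vault tapes back through the cycle once they are on site again.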

hope this helps
max
 
Hi Max,
First of all, thanks for giving me hints. Yesterday I noticed that reclamation runs at 11 AM in the morning. The actlog shows the message
ANR1163W: it says the volume still contains files which could not be moved. This message repeats for quite a number of volumes.

One more thing: collocation by group is enabled on prod_tape_pool.

I have run the select statement below, please have a look.

tsm: SAPHAD_TSM>select volume_name, pct_reclaim from volumes where pct_reclaim
>60 order by 2,1 asc

VOLUME_NAME PCT_RECLAIM
------------------ -----------
A00015L3 60.9
A00033L3 61.9
A00005L3 69.7
A00012L3 83.8
A00011L3 84.9
A00029L3 87.4
A00019L3 93.1
A00027L3 97.7
A00025L3 99.2
A00026L3 99.3
101200L3 99.5
A00006L3 99.5
A00013L3 99.5
A00022L3 99.6
A00024L3 99.6
A00014L3 99.7
A00000L3 99.9
A00004L3 99.9
A00009L3 99.9
A00017L3 99.9
more... (<ENTER> to continue, 'C' to cancel)
A00018L3 99.9
A00028L3 99.9
 
hi aymen,
still I can't see where all those volumes are coming from... ("q vol" and "q libvol").
You have plenty of volumes at 99.x which are really eligible to come back to scratch, i.e. they have very little valid data on them. Why are they not becoming scratch? When running reclamation you should be able to move that data to other sequential volumes. To do that you either need another drive (1 to read from and 1 to write to) or you can create a sequential file pool to move that data onto: your volumes will then become scratch and you can eventually move the data from the file pool back to the tape pool.
How many drives do you have? ("q drive", "q library")
Should you have just one drive, your only chance is to do as I wrote above, i.e. create and use a file pool; a rough sketch of the commands is below.
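Something like this would be the file pool route (names, sizes and the directory are just placeholders, adjust them to your server's filesystem):

Code:
define devclass reclaimfile devtype=file maxcapacity=4096M mountlimit=20 directory=/tsmfs/filepool
define stgpool reclaim_filepool reclaimfile maxscratch=50 description="temporary reclaim target"
update stgpool prod_tape_pool reclaimstgpool=reclaim_filepool

That way reclamation can write to disk files instead of needing a second free tape drive, and the reclaimed tapes can go back to scratch.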

hope this helps
max
 
Hi Max,
All these volumes are in vault position. I have two drives. We have offsite backup configured, as I showed you in our policy setup. I do not understand why these volumes are not becoming scratch. Please find below a few of the results you asked for, and also today's reclamation actlog messages.


Code:
q vol
Volume Name Storage Device Estimated Pct Volume
Pool Name Class Name Capacity Util Status
------------------------ ----------- ---------- --------- ----- --------
/tsmfs/stgpools/archive- SAPLOG_DEV- DISK 1,000.0 25.4 On-Line
_logs_vol1 _DISKPOOL
/tsmfs/stgpools/devstgp- DEV_DISK_P- DISK 5,000.0 0.4 On-Line
ool_vol1 OOL
/tsmfs/stgpools/devstgp- DEV_DISK_P- DISK 5,000.0 1.1 On-Line
ool_vol2 OOL
/tsmfs/stgpools/prodstg- PROD_DISK_- DISK 5,000.0 0.0 On-Line
pool_vol1 POOL
/tsmfs/stgpools/prodstg- PROD_DISK_- DISK 5,000.0 0.0 On-Line
pool_vol2 POOL
/tsmfs/stgpools/saplog_- SAPLOG_PRO- DISK 2,048.0 71.2 On-Line
prod_vol1 D_DISKPOOL
/usr/tivoli/tsm/server/- SPACEMGPOOL DISK 8.0 0.0 On-Line
bin/spcmgmt.dsm
101200L3 PROD_OFFSI- LTO 819,200.0 0.4 Filling
TE_POOL
101202L3 DEV_TAPE_P- LTO 1,347,239 83.4 Filling
OOL .9
A00000L3 PROD_OFFSI- LTO 1,611,270 0.0 Filling
more... (<ENTER> to continue, 'C' to cancel)
TE_POOL .8
A00001L3 PROD_OFFSI- LTO 1,461,030 70.9 Filling
TE_POOL .0
A00003L3 PROD_OFFSI- LTO 819,200.0 54.6 Filling
TE_POOL
A00004L3 PROD_OFFSI- LTO 819,200.0 0.0 Filling
TE_POOL
A00005L3 DEV_TAPE_P- LTO 1,591,992 12.6 Full
OOL .2
A00006L3 PROD_OFFSI- LTO 819,200.0 0.5 Filling
TE_POOL
A00007L3 PROD_TAPE_- LTO 1,422,597 52.7 Filling
POOL .6
A00008L3 PROD_TAPE_- LTO 819,200.0 6.4 Filling
POOL
A00009L3 PROD_OFFSI- LTO 819,200.0 0.0 Filling
TE_POOL
A00010L3 PROD_OFFSI- LTO 819,200.0 79.8 Filling
TE_POOL
A00011L3 PROD_OFFSI- LTO 1,097,093 15.0 Filling
TE_POOL .3
A00012L3 PROD_OFFSI- LTO 1,118,539 16.1 Filling
TE_POOL .9
more... (<ENTER> to continue, 'C' to cancel)
A00013L3 PROD_OFFSI- LTO 951,975.3 0.5 Filling
TE_POOL
A00014L3 PROD_OFFSI- LTO 839,741.6 0.1 Filling
TE_POOL
A00015L3 PROD_OFFSI- LTO 819,200.0 5.2 Filling
TE_POOL
A00017L3 PROD_OFFSI- LTO 819,200.0 0.0 Filling
TE_POOL
A00018L3 PROD_OFFSI- LTO 1,091,944 0.0 Filling
TE_POOL .5
A00019L3 PROD_OFFSI- LTO 819,200.0 5.4 Filling
TE_POOL
A00021L3 PROD_TAPE_- LTO 819,200.0 20.6 Filling
POOL
A00022L3 PROD_OFFSI- LTO 819,200.0 0.3 Filling
TE_POOL
A00023L3 PROD_OFFSI- LTO 1,733,006 79.7 Full
TE_POOL .1
A00024L3 PROD_OFFSI- LTO 819,200.0 0.4 Filling
TE_POOL
A00025L3 PROD_OFFSI- LTO 819,200.0 0.7 Filling
TE_POOL
A00026L3 PROD_OFFSI- LTO 1,189,954 0.4 Filling
more... (<ENTER> to continue, 'C' to cancel)
TE_POOL .2
A00027L3 PROD_OFFSI- LTO 819,200.0 0.8 Filling
TE_POOL
A00028L3 PROD_OFFSI- LTO 1,428,560 0.0 Full
TE_POOL .4
A00029L3 PROD_OFFSI- LTO 1,618,300 11.8 Filling
TE_POOL .1
A00030L3 DEV_TAPE_P- LTO 1,621,393 80.5 Full
OOL .8
A00033L3 PROD_OFFSI- LTO 819,200.0 15.0 Filling
TE_POOL
A00065L3 PROD_TAPE_- LTO 851,308.8 82.9 Filling
POOL
 
Q libvol
 
Library Name Volume Name Status Owner Last Use Home Device
Element Type
------------ ----------- ---------- ---------- --------- ------- ------
LIB_3582 101202L3 Private SAPHAD_TSM Data 4,105 LTO
LIB_3582 A00002L3 Private SAPHAD_TSM DbBackup 4,098 LTO
LIB_3582 A00005L3 Private SAPHAD_TSM Data 4,101 LTO
LIB_3582 A00007L3 Private SAPHAD_TSM Data 4,104 LTO
LIB_3582 A00008L3 Private SAPHAD_TSM Data 4,096 LTO
LIB_3582 A00010L3 Private SAPHAD_TSM Data 4,102 LTO
LIB_3582 A00021L3 Private SAPHAD_TSM Data 4,108 LTO
LIB_3582 A00030L3 Private SAPHAD_TSM Data 4,097 LTO
LIB_3582 A00065L3 Private SAPHAD_TSM Data 4,100 LTO
 
Q drmedia
tsm: SAPHAD_TSM>q drm
Volume Name State Last Update Automated
Date/Time LibName
---------------- ----------------- ------------------- -------------
101200L3 Vault 03/05/08 19:11:38
A00000L3 Vault 03/05/08 19:11:38
A00001L3 Vault 03/05/08 19:11:38
A00003L3 Vault 03/08/08 10:22:47
A00004L3 Vault 03/05/08 19:11:38
A00006L3 Vault 03/05/08 19:11:38
A00009L3 Vault 03/05/08 19:11:38
A00010L3 Mountable 03/09/08 15:00:17 LIB_3582
A00011L3 Vault 03/05/08 19:11:38
A00012L3 Vault 03/05/08 19:11:38
A00013L3 Vault 03/05/08 19:11:38
A00014L3 Vault 03/05/08 19:11:38
A00017L3 Vault 03/05/08 19:11:38
A00019L3 Vault 03/05/08 19:11:38
A00022L3 Vault 03/05/08 19:11:38
A00023L3 Vault 03/08/08 10:23:57
A00024L3 Vault 03/05/08 19:11:38
A00025L3 Vault 03/05/08 19:11:38
A00026L3 Vault 03/05/08 19:11:38
A00027L3 Vault 03/05/08 19:11:38
A00028L3 Vault 03/05/08 19:11:38
A00029L3 Vault 03/05/08 19:11:38
A00033L3 Vault 03/08/08 10:25:29
A00002L3 Mountable 03/08/08 10:29:27 LIB_3582
A00032L3 Vault retrieve 03/08/08 10:26:39
 
 
 
Q library
 
tsm: SAPHAD_TSM>q library
Library Name: LIB_3582
Library Type: SCSI
ACS Id:
Private Category:
Scratch Category:
WORM Scratch Category:
External Manager:
Shared: Yes
LanFree:
ObeyMountRetention:
 
Q drive
 
tsm: SAPHAD_TSM>q drive
Library Name Drive Name Device Type On-Line
------------ ------------ ----------- ----------
LIB_3582 DRIVE_0 LTO Yes
LIB_3582 DRIVE_1 LTO Yes
 
Today's reclamation actlog messages:
 
03/09/08 15:00:17 ANR2753I (RECLAMATION):contains files which could not be
moved. (SESSION: 11194)
03/09/08 15:00:17 ANR1163W Offsite volume A00012L3 still contains files
which could not be moved. (SESSION: 11194, PROCESS: 829)
03/09/08 15:00:17 ANR2753I (RECLAMATION):ANR1163W Offsite volume A00012L3
still (SESSION: 11194)
03/09/08 15:00:17 ANR2753I (RECLAMATION):contains files which could not be
moved. (SESSION: 11194)
03/09/08 15:00:17 ANR1163W Offsite volume A00025L3 still contains files
which could not be moved. (SESSION: 11194, PROCESS: 829)
03/09/08 15:00:17 ANR2753I (RECLAMATION):ANR1163W Offsite volume A00025L3
still (SESSION: 11194)
03/09/08 15:00:17 ANR2753I (RECLAMATION):contains files which could not be
moved. (SESSION: 11194)
03/09/08 15:00:17 ANR1163W Offsite volume A00029L3 still contains files
which could not be moved. (SESSION: 11194, PROCESS: 829)
03/09/08 15:00:17 ANR1163W Offsite volume A00027L3 still contains files
which could not be moved. (SESSION: 11194, PROCESS: 829)
03/09/08 15:00:17 ANR1163W Offsite volume A00019L3 still contains files
which could not be moved. (SESSION: 11194, PROCESS: 829)
03/09/08 15:00:17 ANR2753I (RECLAMATION):ANR1163W Offsite volume A00029L3
still (SESSION: 11194)
03/09/08 15:00:17 ANR2753I (RECLAMATION):contains files which could not be
moved. (SESSION: 11194)
03/09/08 15:00:17 ANR2753I (RECLAMATION):ANR1163W Offsite volume A00027L3
still (SESSION: 11194)
03/09/08 15:00:17 ANR2753I (RECLAMATION):contains files which could not be
moved. (SESSION: 11194)
03/09/08 15:00:17 ANR2753I (RECLAMATION):ANR1163W Offsite volume A00019L3
still (SESSION: 11194)
03/09/08 15:00:17 ANR2753I (RECLAMATION):contains files which could not be
moved. (SESSION: 11194)
03/09/08 15:00:17 ANR1163W Offsite volume A00033L3 still contains files
which could not be moved. (SESSION: 11194, PROCESS: 829)
03/09/08 15:00:17 ANR2753I (RECLAMATION):ANR1163W Offsite volume A00033L3
still (SESSION: 11194)
03/09/08 15:00:17 ANR2753I (RECLAMATION):contains files which could not be
moved. (SESSION: 11194)
 
hi,
ok, you may have done everything in the correct way. I can't see any specific error in the log I read.
I found a PMR in IBM; you can follow IC48152 (which has been resolved in release 5.4) should you have an earlier version.
You definitely want to have a look at the following, which really seems to match your issue: http://www-1.ibm.com/support/docview.wss?uid=swg1PK25385

In the meantime, if I were you, I'd run reclamation at threshold=99, swapping the files onto a brand-new file pool, and you'll soon get a few tapes back as scratch.
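If you go that way, something like this is what I mean (the pool name is just an example; use the pool owning the 99.x volumes, and put the threshold back to its usual value afterwards):

Code:
update stgpool prod_offsite_pool reclaim=99
q process

Watch the processes and the actlog to see whether the volumes actually empty out this time.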

Please let me know if you succeed in resolving your issue.
cheers
max
 
In Progress state in q event

Hi,

For the node mentioned below, the incremental backup takes 6 days to complete.

But on lots of days I see it in the 'In Progress' state. What does 'In Progress' mean? I don't see that session running. Can anybody tell me what 'In Progress' is?

12-02-2008 13:00:00 12-02-2008 14:17:50 TBO_SEV3_INC- NODEALMOND In Progress


13-02-2008 13:00:00 TBO_SEV3_INC- NODEALMOND Missed

14-02-2008 13:00:00 TBO_SEV3_INC- NODEALMOND Missed

15-02-2008 13:00:00 15-02-2008 16:54:30 TBO_SEV3_INC- NODEALMOND In Progress
 
hi,
sorry prasanve, I believe you should start another thread, because this has nothing to do with this one.
cheers
max
 
Hi Max,
I have brought back all the tapes which were showing 0% utilisation from the vault location and moved the data with the move data command. Now I have 11 scratch tapes. I have a few questions to clarify:

  1. Why does my setup use scratch tapes instead of using the existing tapes in the library?
  2. We have offsite backup policies for daily, weekly, monthly and yearly backups. How do I know which tape in the vault location holds the monthly or yearly backups?
  3. Which command or script is used to move the files to the offsite tape? Is it the migrate command?
I am really sorry if I am troubling you.

Thanks and best regards,
aymen
 
hi,
1. You need to read about "collocation" in the admin guide: basically, TSM tries to write on already-used tapes, unless you have collocation on. "By node" will write data from each node on its own tapes (very tape-consuming); "by group" will write data from a group of nodes on the same tapes. If collocation is off, everything goes onto the same tape until it reaches end of volume. You might also have chosen a different destination in each copy group, which would give the same results as collocation by group/node (you can check by issuing "q copy F=D"; see the example commands after point 3 below).

2. Well, I'm not sure you really need to know that.

3. Migration has got nothing to do with moving data into the copy pool. Basically, in a healthy environment, reclamation would get the job done, but you still have that issue, so a manual move data will return your tapes to scratch. See the example below.
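For example, something along these lines (the volume name is just one of yours from the q vol listing, and only run the collocate change if you decide you do not need collocation):

Code:
q stgpool prod_tape_pool f=d
q copygroup f=d
update stgpool prod_tape_pool collocate=no
move data A00024L3
q process

The first two commands only show the current collocation setting and the copy group destinations. As for question 3, data normally reaches an offsite copy pool through a "backup stgpool" step in your daily script, so check your admin schedules rather than migration.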

cheers
max
 