ANS1329S Server out of data storage space

Hi, I have a problem with "ANS1329S Server out of data storage space" on a TSM 5.4.0.3 server with a TS3100 library, where I have 4 empty LTO3 tapes in one sequential storage pool.
The tapes are labeled by barcode and checked in as scratch; when assigned to the storage pool they became private but stayed empty. Then when I try to back up, the client shows this message. I have an admin schedule at 00:15 that runs 'update stgpool BACKUPPOOL nextstgpool=SREDA ACCess=READWrite', and a client schedule at 02:30. Should the tapes become scratch? As far as I know, when I put a tape into a storage pool it becomes private, right?
 
Your question is not very clear, but here are a few things to check:

1. Do a 'q stg f=d' for the storage pool in question and look at the value of the 'Maximum Scratch Volumes Allowed' parameter. If it is empty, or if the storage pool has already reached that maximum, you need to increase it.

2. Look at the TSM server activity log at the same time you run the backup from the client. It will have more information that will tell you what the problem may be. (Example commands below.)
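
A minimal sketch of both checks, assuming the tape pool is the SREDA pool named in the question and the client window starts at 02:30 (both taken from the original post):
Code:
/* check 'Maximum Scratch Volumes Allowed' on the tape pool */
q stgpool SREDA f=d
/* review server messages around the client schedule window */
q actlog begintime=02:30 endtime=03:30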
 
Thanks Alimirza for the reply.
The thing is that although the 4 tapes are empty (400 GB each) and the backup files are only 90 GB, the "out of data storage space" error still happens. I know the tapes should be scratch so data can be written to them, but they are private, empty, read/write. MAXSCRATCH is 5, but I don't think that's the cause. TSM doesn't make them scratch, and I think that is the problem. I have a standalone TSM 5.4.0.3. And I have this from dsmerror.log:
Code:
07/18/2007 02:35:39 ANS1999E Incremental processing of '\\xxxx\z$\Backup\xxxxx\Data\Wednesday\*' stopped.
07/18/2007 02:35:39 ANS1329S Server out of data storage space
07/18/2007 02:35:39 ANS1512E Scheduled event 'SREDA' failed. Return code = 12.
 
Increasing the MAXSCRATCH of the stgpool to a bigger number should solve your problem.
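
A minimal sketch, assuming the tape pool is the SREDA pool from the original post and that 20 volumes is enough headroom (both assumptions):
Code:
/* raise the scratch volume ceiling on the sequential pool */
update stgpool SREDA maxscratch=20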
 
Label them again, e.g.:

label libvolume <library name> search=yes labelsource=barcode checkin=scratch overwrite=yes

When TSM needs a tape, it will change it from scratch to private.
 
Personally I would ignore the last 2 posts because when you said:

"then when assigned to the stgpool became private but empty"

...I take it you are defining the tapes into the pool with "def vol"? That sounds perfectly fine to me - TSM will use those private/empty volumes first before it tries to get a scratch tape, so scratch tapes are not the problem here.

Check the actlog (q actlog) on the server when you get the client error and see what it says; it will probably tell you which pool is the issue.

Is the pool you are putting the data in set to access=readwrite? (See 'q stgpool <pool> f=d'.)

Also, are you sure the client is sending data to this pool? I suspect it could be a different pool that is full. Check your other disk and tape pools to see if those have a capacity issue. For example, it could be your directory pool (where DIRMC sends your directories).
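
A quick sketch of those checks; the 'space' search string is illustrative and the exact message text may vary by server version:
Code:
/* look for out-of-space messages logged today */
q actlog begindate=today search='space'
/* compare utilization across all disk and tape pools */
q stgpool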
 
Guys, thanks very much for your support and help, I appreciate it a lot.
I first removed the tapes from the storage pools, relabeled them and checked them back in, and now they work fine. (Rough commands below.)

THANK YOU - THIS TSM COMMUNITY RULES
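
For reference, a rough sketch of that sequence, assuming the volumes are empty and using placeholder names VOL001 and LIB01:
Code:
/* drop the empty private volume from its storage pool */
delete volume VOL001
/* check it out of the library inventory, leaving it in its slot */
checkout libvolume LIB01 VOL001 remove=no
/* relabel from barcode and check it back in as scratch */
label libvolume LIB01 search=yes labelsource=barcode checkin=scratch overwrite=yes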
 
Hello,
I'm doing a backup and my client shows me this message:
Code:
08/10/2013 12:48:38 Directory-->                   0 \\osaka\l$\Fixas\Andar0201\Dados [Sent]
08/10/2013 12:48:38 Directory-->                   0 \\osaka\l$\Fixas\Andar0201\Indices [Sent]
08/10/2013 12:48:39 Normal File-->         3,698,190 \\osaka\l$\Fixas\Andar0201\Dados\20130705_19.dar  ** Unsuccessful **
08/10/2013 12:48:39 ANS1114I Waiting for mount of offline media.
08/10/2013 12:48:40 Retry # 1  Directory-->               4,096 \\osaka\l$\ [Sent]      
08/10/2013 12:48:40 Retry # 1  Directory-->                   0 \\osaka\l$\$RECYCLE.BIN [Sent]      
08/10/2013 12:48:40 Retry # 1  Directory-->                   0 \\osaka\l$\Fixas [Sent]      
08/10/2013 12:48:40 Retry # 1  Directory-->                   0 \\osaka\l$\Moveis 214 [Sent]      
08/10/2013 12:48:40 Retry # 1  Directory-->                   0 \\osaka\l$\Moveis 233 [Sent]      
08/10/2013 12:48:40 Retry # 1  Directory-->                   0 \\osaka\l$\System Volume Information [Sent]      
08/10/2013 12:48:40 Retry # 1  Directory-->                   0 \\osaka\l$\$RECYCLE.BIN\S-1-5-21-2428313617-738608896-1602740665-1003 [Sent]      
08/10/2013 12:48:40 Retry # 1  Directory-->                   0 \\osaka\l$\$RECYCLE.BIN\S-1-5-21-3460880400-876166238-3356493510-500 [Sent]      
08/10/2013 12:48:40 Retry # 1  Normal File-->               129 \\osaka\l$\$RECYCLE.BIN\S-1-5-21-2428313617-738608896-1602740665-1003\desktop.ini [Sent]      
08/10/2013 12:48:40 Retry # 1  Directory-->                   0 \\osaka\l$\$RECYCLE.BIN\S-1-5-21-3460880400-876166238-3356493510-500\$RI9I131 [Sent]

I believe this is happening because my storage pool DSK_365D_COFRE is full:
Code:
tsm: TSM_ALESC>q stg

Storage        Device         Estimated      Pct      Pct    High    Low    Next Stora-
Pool Name      Class Name      Capacity     Util     Migr     Mig    Mig    ge Pool
                                                              Pct    Pct
-----------    ----------    ----------    -----    -----    ----    ---    -----------
DSK_30D        DISK             2,400 G      0.9      0.4      90     70    TAPE_30D
DSK_365D_C-    DISK             2,000 G    100.0    100.0      90     30    TAPE_365D_-
 OFRE                                                                        COFRE
TAPE_30D       DEVCL_LTO       71,418 G     20.5     39.0      50     30
TAPE_365D_-    DEVCL_LTO       58,691 G     87.1     93.4      90     70
 COFRE

When I use the command line to empty the pool, nothing happens!
Code:
tsm: TSM_ALESC>mig stg DSK_365D_Cofre lo=0
ANR2110I MIGRATE STGPOOL started as process 69.
ANR1000I Migration process 69 started for storage pool DSK_365D_COFRE manually, highMig=90,
lowMig=0, duration=No.
ANR2110I MIGRATE STGPOOL started as process 70.
ANR1000I Migration process 70 started for storage pool DSK_365D_COFRE manually, highMig=90,
lowMig=0, duration=No.


tsm: TSM_ALESC>q proc


 Process     Process Description      Status
  Number
--------     --------------------     -------------------------------------------------
      69     Migration                Disk Storage Pool DSK_365D_COFRE, Moved Files: 0,
                                       Moved Bytes: 0, Unreadable Files: 0, Unreadable
                                       Bytes: 0. Current Physical File (bytes):
                                       178.536.448 Waiting for mount of output volume
                                       CPD216L4 (22 seconds).


tsm: TSM_ALESC>q proc
ANR0944E QUERY PROCESS: No active processes found.
ANS8001I Return code 11.

This is my dsmerror.log
Code:
08/10/2013 12:50:20 ANS1228E Sending of object '\\osaka\l$\Fixas\Andar0201\Dados\20130705_24.dar' failed
08/10/2013 12:50:20 ANS1329S Server out of data storage space
08/10/2013 12:50:20 ANS1512E Scheduled event 'OSAKA_FULL_COFRE' failed.  Return code = 12.

I don't know what else I can do, because I still have 5 scratch tapes.
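
One way to double-check what the library really holds (LIB01 is a placeholder library name):
Code:
/* list volumes and their scratch/private status */
q libvolume LIB01
/* count just the scratch volumes */
select count(*) from libvolumes where status='Scratch'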
 
That's my activity log when I try to migrate my data to tape:
Code:
ANR0985I Process 75 for MIGRATION running in the BACKGROUND completed with completion state FAILURE at
 14:38:53. (SESSION: 1776, PROCESS: 75)
ANR1002I Migration for storage pool DSK_365D_COFRE will be
 retried in 60 seconds. (SESSION: 1776)
ANR8336I Verifying label of LTO volume CPD216L4 in drive
 DRIVE02 (\\.\Tape0). (SESSION: 1776, PROCESS: 75)
ANR2017I Administrator ADMIN issued command: QUERY PROCESS
  (SESSION: 1776)
ANR0944E QUERY PROCESS: No active processes found.
 (SESSION: 1776)
ANR2017I Administrator ADMIN issued command: ROLLBACK
 (SESSION: 1776)
ANR8468I LTO volume CPD216L4 dismounted from drive DRIVE02
 (\\.\Tape0) in library LIB01. (SESSION: 1776, PROCESS:
 75)
ANR1003I Migration retry delay ended; checking migration
 status for storage pool DSK_365D_COFRE. (SESSION: 1776)
ANR0984I Process 77 for MIGRATION started in the
 BACKGROUND at 14:39:53. (SESSION: 1776, PROCESS: 77)
ANR2110I MIGRATE STGPOOL started as process 77. (SESSION:
 1776, PROCESS: 77)
ANR1000I Migration process 77 started for storage pool
 DSK_365D_COFRE manually, highMig=90, lowMig=0,
 duration=No. (SESSION: 1776, PROCESS: 77)

Code:
tsm: TSM_ALESC>q stg

Storage        Device         Estimated      Pct      Pct    High    Low    Next Stora-
Pool Name      Class Name      Capacity     Util     Migr     Mig    Mig    ge Pool
                                                              Pct    Pct
-----------    ----------    ----------    -----    -----    ----    ---    -----------
DSK_30D        DISK             2,400 G      1.3      1.3      90     70    TAPE_30D
DSK_365D_C-    DISK             2,000 G    100.0    100.0      90     30    TAPE_365D_-
 OFRE                                                                        COFRE
TAPE_30D       DEVCL_LTO       71,418 G     20.5     39.0      50     30
TAPE_365D_-    DEVCL_LTO       58,691 G     87.1     93.4      90     70
 COFRE
 
Do a Q STG F=D for the NEXT storage pool of DSK_365D_COFRE. Is NUMSCRATCHUSED <= MAXSCRATCH?
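
In this thread the next pool is TAPE_365D_COFRE, so the check is:
Code:
q stg TAPE_365D_COFRE f=d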
 
Do a Q STG F=D for the NEXT storage pool of DSK_365D_COFRE. Is NUMSCRATCHUSED <= MAXSCRATCH?
I will post both:
Code:
              Storage Pool Name: DSK_365D_COFRE
              Storage Pool Type: Primary
              Device Class Name: DISK
             Estimated Capacity: 2,100 G
             Space Trigger Util: 95.2
                       Pct Util: 95.2
                       Pct Migr: 95.2
                    Pct Logical: 100.0
                   High Mig Pct: 90
                    Low Mig Pct: 30
                Migration Delay: 0
             Migration Continue: Yes
            Migration Processes: 2
          Reclamation Processes:
              Next Storage Pool: TAPE_365D_COFRE
           Reclaim Storage Pool:
         Maximum Size Threshold: No Limit
                         Access: Read/Write
                    Description: Disk storage pool - store data with up to 30 days of retention
              Overflow Location:
          Cache Migrated Files?: Yes
                     Collocate?:
          Reclamation Threshold:
      Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed:
 Number of Scratch Volumes Used:
  Delay Period for Volume Reuse:
         Migration in Progress?: No
           Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 56
       Reclamation in Progress?:
 Last Update by (administrator): ADMIN
          Last Update Date/Time: 05/02/2012 15:15:47
       Storage Pool Data Format: Native
           Copy Storage Pool(s):
            Active Data Pool(s):
        Continue Copy on Error?: Yes
                       CRC Data: No
               Reclamation Type:
    Overwrite Data when Deleted:

              Storage Pool Name: TAPE_365D_COFRE
              Storage Pool Type: Primary
              Device Class Name: DEVCL_LTO
             Estimated Capacity: 58,691 G
             Space Trigger Util:
                       Pct Util: 87.1
                       Pct Migr: 93.4
                    Pct Logical: 100.0
                   High Mig Pct: 90
                    Low Mig Pct: 70
                Migration Delay: 0
             Migration Continue: Yes
            Migration Processes: 1
          Reclamation Processes: 1
              Next Storage Pool:
           Reclaim Storage Pool:
         Maximum Size Threshold: No Limit
                         Access: Read/Write
                    Description:
              Overflow Location:
          Cache Migrated Files?:
                     Collocate?: No
          Reclamation Threshold: 60
      Offsite Reclamation Limit:
Maximum Scratch Volumes Allowed: 40
 Number of Scratch Volumes Used: 56
  Delay Period for Volume Reuse: 0 Day(s)
         Migration in Progress?: No
           Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
       Reclamation in Progress?: No
 Last Update by (administrator): ADMIN
          Last Update Date/Time: 07/10/2013 15:06:21
       Storage Pool Data Format: Native
           Copy Storage Pool(s):
            Active Data Pool(s):
        Continue Copy on Error?: Yes
                       CRC Data: No
               Reclamation Type: Threshold
    Overwrite Data when Deleted:
Code:
tsm: TSM_ALESC>select (MAXSCRATCH - NUMSCRATCHUSED) AS Scratches_Left FROM STGPOOLS WHERE STGPOOL_NAME='TAPE_365D_COFRE'

SCRATCHES_LEFT
--------------
           -16
 
Here is the problem:
Storage Pool Name: TAPE_365D_COFRE
Maximum Scratch Volumes Allowed: 40
Number of Scratch Volumes Used: 56
 
Thank you, I increased MAXSCRATCH and done!! It's migrating now.
Thank you very much, you guys are the best!!!
 
Guys, thanks very much for your support and help, I appreciate it a lot.
I first removed the tapes from the storage pools, relabeled them and checked them back in, and now they work fine.

THANK YOU - THIS TSM COMMUNITY RULES

Hi, how did you do this? I'm still having the same issue. Thank you in advance.
 
Increase MAXSCRATCH

Hi, how did you do this? I'm still having the same issue. Thank you in advance.

First, check how many scratch volumes the pool has left:
Code:
select (MAXSCRATCH - NUMSCRATCHUSED) AS Scratches_Left FROM STGPOOLS WHERE STGPOOL_NAME='Name_OF_THE_STGPOOL'

SCRATCHES_LEFT
--------------
-36

Second, if it is negative or zero, you need to increase that number:
Code:
UPDate STGpool TAPE_365D_COFRE MAXSCRATCH=number_of_volumes

e.g.: UPDate STGpool TAPE_365D_COFRE MAXSCRATCH=70
 