Help - Backups are failing

I tried to audit the virtual disk volume.

ANS8000I Server command: 'audit volume /opt/tivoli/tsm/stg01/0000cdfc.bfs'
ANR2017I Administrator ADMIN issued command: AUDIT VOLUME /opt/tivoli/tsm/stg01/0000cdfc.bfs
ANR2401E AUDIT VOLUME: Volume /opt/tivoli/tsm/stg01/0000cdfc.bfs is not defined in a storage pool.

Strange, I would have thought these disk volumes were part of the storage pool, since that is what QUERY VOLUME shows.


QUERY VOLUME

Volume Name Storage Device Estimated Pct Volume
Pool Name Class Name Capacity Util Status
------------------------ ----------- ---------- --------- ----- --------
/opt/tivoli/tsm/stg01/0- FILE_STG01 FILEDEVC 2.0 G 83.7 Full
000CAC5.BFS
/opt/tivoli/tsm/stg01/0- FILE_STG01 FILEDEVC 2.0 G 60.5 Full
000CAC6.BFS
/opt/tivoli/tsm/stg01/0- FILE_STG01 FILEDEVC 2.0 G 100.0 Full
 
post results of this command

select * from volumes where volume_name='/opt/tivoli/tsm/stg01/0000cdfc.bfs'

The part in quotes is case sensitive, so if it displays in lower case in a q vol, then it should stay in lower case for the select statement.
 
Wow, you're up already - as dedicated as me :) You got as much sleep as I did, then. Mine was a sleepless night due to these issues.


ok this works. So they do exist somewhere in TSM


tsm: TSM_PROD01>select * from volumes where volume_name='/opt/tivoli/tsm/stg01/0000CAD9.BFS'
ANR2017I Administrator ADMIN issued command: select * from volumes where volume_name='/opt/tivoli/tsm/stg01/0000CAD9.BFS'

VOLUME_NAME: /opt/tivoli/tsm/stg01/0000CAD9.BFS
STGPOOL_NAME: FILE_STG01
DEVCLASS_NAME: FILEDEVC
EST_CAPACITY_MB: 2047.6
SCALEDCAP_APPLIED:
PCT_UTILIZED: 100.0
STATUS: FULL
ACCESS: READWRITE
PCT_RECLAIM: 0.0
SCRATCH: YES
ERROR_STATE: NO
NUM_SIDES: 1
TIMES_MOUNTED: 1
WRITE_PASS: 1
LAST_WRITE_DATE: 2009-12-28 12:56:52.000000
LAST_READ_DATE: 2009-12-28 12:56:52.000000
PENDING_DATE:
WRITE_ERRORS: 0
READ_ERRORS: 0
LOCATION:
MVSLF_CAPABLE: No
CHG_TIME: 2010-01-06 09:24:41.000000
CHG_ADMIN: ADMIN
BEGIN_RCLM_DATE:
END_RCLM_DATE:
VOL_ENCR_KEYMGR:


------------------------------------------------------------------------

post results of this command

select * from volumes where volume_name='/opt/tivoli/tsm/stg01/0000cdfc.bfs'

The part in quotes is case sensitive, so if it displays in lower case in a q vol, then it should stay in lower case for the select statement.
 
I have deleted all drives, the paths to the drives, and the library, and have re-created them with the following. Hope it all looks OK. My drive device type is LTO, not Generic, which seemed to be causing these problems.


tsm: TSM_PROD01>DEFINE PATH TSM_PROD01 DRIVE1 srctype=server desttype=drive device=/dev/IBMtape0 library=ts3100

ANR8955I Drive DRIVE1 in library TS3100 with serial number is updated with the newly discovered serial number 1K10002024 .
ANR1720I A path from TSM_PROD01 to TS3100 DRIVE1 has been defined.

tsm: TSM_PROD01>DEFINE PATH TSM_PROD01 DRIVe2 srctype=server desttype=drive device=/dev/IBMtape1 library=ts3100

ANR8955I Drive DRIVE2 in library TS3100 with serial number is updated with the newly discovered serial number 1K10002592 .
ANR1720I A path from TSM_PROD01 to TS3100 DRIVE2 has been defined.

tsm: TSM_PROD01>q drive
ANR2017I Administrator ADMIN issued command: QUERY DRIVE

Library Name Drive Name Device Type On-Line
------------ ------------ ----------- -------------------
TS3100 DRIVE1 LTO Yes
TS3100 DRIVE2 LTO Yes

tsm: TSM_PROD01>q path
ANR2017I Administrator ADMIN issued command: QUERY PATH

Source Name Source Type Destination Destination On-Line
Name Type
----------- ----------- ----------- ----------- -------
TSM_PROD01 SERVER TS3100 LIBRARY Yes
TSM_PROD01 SERVER DRIVE1 DRIVE Yes
TSM_PROD01 SERVER DRIVE2 DRIVE Yes
 
Auditing these volumes now - seems to be working OK.

But the activity log reports a failure:

01/06/2010 10:56:52 ANR2313I Audit Volume (Inspect Only) process started for
volume /opt/tivoli/tsm/stg01/0000CAD2.BFS (process ID
14). (SESSION: 15, PROCESS: 14)
01/06/2010 10:56:53 ANR2336W Audit Volume terminated for volume
/opt/tivoli/tsm/stg01/0000CAD2.BFS - insufficient number
of mount points available for removable media. (SESSION:
15, PROCESS: 14)
01/06/2010 10:56:53 ANR0985I Process 14 for AUDIT VOLUME (INSPECT ONLY)
running in the BACKGROUND completed with completion state
FAILURE at 10:56:53 AM. (SESSION: 15, PROCESS: 14)




tsm: TSM_PROD01>audit vol /opt/tivoli/tsm/stg01/0000CAD2.BFS
ANR2017I Administrator ADMIN issued command: AUDIT VOLUME /opt/tivoli/tsm/stg01/0000CAD2.BFS
ANR2310W This command will compare all inventory references to volume /opt/tivoli/tsm/stg01/0000CAD2.BFS with the actual data stored on the volume and will
report any discrepancies; the data will be inaccessible to users until the operation completes.

Do you wish to proceed? (Yes (Y)/No (N)) y
ANR2017I Administrator ADMIN issued command: AUDIT VOLUME /opt/tivoli/tsm/stg01/0000CAD2.BFS
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CACF.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAD0.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAD1.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAD2.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAD3.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAD4.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAD5.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAD6.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAD7.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAD8.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAD9.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CADA.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CADB.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CADC.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CADD.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CADE.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CADF.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAE0.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAE1.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAE2.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAE3.BFS is required for audit process.
ANR1199I Removable volume /opt/tivoli/tsm/stg01/0000CAE4.BFS is required for audit process.
ANR0984I Process 14 for AUDIT VOLUME (INSPECT ONLY) started in the BACKGROUND at 10:56:52 AM.
ANR2313I Audit Volume (Inspect Only) process started for volume /opt/tivoli/tsm/stg01/0000CAD2.BFS (process ID 14).
ANS8003I Process number 14 started.

tsm: TSM_PROD01>ANR2336W Audit Volume terminated for volume /opt/tivoli/tsm/stg01/0000CAD2.BFS - insufficient number of mount points available for removable media.
ANR0985I Process 14 for AUDIT VOLUME (INSPECT ONLY) running in the BACKGROUND completed with completion state FAILURE at 10:56:53 AM.

 
IBM suggests the following - but how do I make more mount points available on the disks?


ANR2336W: Audit Volume terminated for volume volume name - insufficient number of mount points available for removable media.

Explanation

During Audit Volume for the indicated volume, the server could not allocate sufficient mount points for the volume required.

System action

Audit Volume stops.

User response

If necessary, make more mount points available.
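For a FILE device class there are no physical drives; the "mount points" are logical and capped by the MOUNTLIMIT parameter on the device class. A minimal sketch of raising it, assuming the device class name FILEDEVC used elsewhere in this thread:

```
update devclass filedevc mountlimit=80
query devclass filedevc format=detail
```

The audit has to hold open every volume the spanned data touches, so the limit needs to be at least that many volumes.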


 
It would be my pleasure.

tsm: TSM_PROD01>q dev filedevc f=d
Device Class Name: FILEDEVC
Device Access Strategy: Sequential
Storage Pool Count: 1
Device Type: FILE
Format: DRIVE
Est/Max Capacity (MB): 2,048.0
Mount Limit: 60 <--- The only thing I changed, but have since changed back
Mount Wait (min):
Mount Retention (min):
Label Prefix:
Library: TS3100
Directory: /opt/tivoli/tsm/stg01
Server Name:
Retry Period:
Retry Interval:
Shared:
High-level Address:
Minimum Capacity:
WORM: No
Drive Encryption:
Scaled Capacity:
Last Update by (administrator): ADMIN
Last Update Date/Time: 01/06/2010 09:40:52
 
here's what bothers me:

Library: TS3100

I have a file based sequential volume device class as well, but no library associated with the device class.

I'm also gathering that if mount points really are being taken, they should show up when you perform a 'q mount', like:

tsm: TSM1>q mou
ANR8333I FILE volume F:\0F01A is mounted R/W, status: IN USE.
ANR8330I LTO volume D00122L3 is mounted R/O in drive DRIVE3 (mt2.0.0.5),
status: IN USE.
ANR8330I LTO volume D00125L3 is mounted R/W in drive DRIVE2 (mt1.0.0.4),
status: IN USE.
ANR8334I 3 matches found.


I just have one seq. file volume mounted above...do you really have 60/80?

Here's my seq. file based device type for your reference:

tsm: TSM1>q dev onlinefile f=d

Device Class Name: ONLINEFILE
Device Access Strategy: Sequential
Storage Pool Count: 1
Device Type: FILE
Format: DRIVE
Est/Max Capacity (MB): 5,120.0
Mount Limit: 80
Mount Wait (min):
Mount Retention (min):
Label Prefix:
Drive Letter:
Library:
Directory: F:\,G:\,H:\
Server Name:
Retry Period:
Retry Interval:
Twosided:
Shared:
High-level Address:
Minimum Capacity:
WORM: No
Drive Encryption:
Scaled Capacity:
Last Update by (administrator): ADMIN
Last Update Date/Time: 01/18/2007 18:35:25
 
Hmm, so you don't think I need the Library: TS3100 entry.

The devclasses for my sequential pool and yours are quite simple, really: yours mounts 5 GB files, mine mounts 2 GB files, each with a directory structure to store them in.

(The default volume size for a FILE devclass is 2 GB, I believe, from just reading the docs on it.)
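If the 2 GB default really is the problem, the volume size for a FILE device class can be raised with the MAXCAPACITY parameter; a sketch, assuming the FILEDEVC device class name from this thread and an illustrative 100 GB figure:

```
update devclass filedevc maxcapacity=100G
```

New scratch volumes pick up the new size; existing volumes keep the size they were created with until their data is moved off (e.g. with MOVE DATA) and they return to scratch.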

I am considering removing the TS3100 entry, but as that is the library name, it doesn't look out of place there.

But I see you don't have a value for that one.


What is your tape library?


OK, I have removed that entry and updated the mount limit back to 80.


tsm: TSM_PROD01>q dev filedevc f=d
Device Class Name: FILEDEVC
Device Access Strategy: Sequential
Storage Pool Count: 1
Device Type: FILE
Format: DRIVE
Est/Max Capacity (MB): 2,048.0
Mount Limit: 80
Mount Wait (min):
Mount Retention (min):
Label Prefix:
Library:
Directory: /opt/tivoli/tsm/stg01
Server Name:
Retry Period:
Retry Interval:
Shared:
High-level Address:
Minimum Capacity:
WORM: No
Drive Encryption:
Scaled Capacity:
Last Update by (administrator): ADMIN
Last Update Date/Time: 01/06/2010 14:26:34
 
If filedevc is a file-based sequential dev class, I don't see why it would need a library associated with it, particularly one that already exists as an LTO-based library (if I'm correct).

Also, are there numerous mount points when you perform a 'q mount'?
 
If filedevc is a file-based sequential dev class, I don't see why it would need a library associated with it, particularly one that already exists as an LTO-based library (if I'm correct).

Also, are there numerous mount points when you perform a 'q mount'?



No, nothing is mounted.

tsm: TSM_PROD01>q mount
ANR2034E QUERY MOUNT: No match found using this criteria.
 
A few questions and a thought here.

1. Can you confirm how many sessions are running with "q sess".
2. Post the output of "q mount", "show asqueued"
3. Can you post the output of "df -k /opt/tivoli/tsm/stg01" (unix command)
4. Post the output of "ulimit -a" (unix command)
5. Can you confirm that the destination of each of your management classes points to the filepool and not to the diskpool (which has no space allocated to it)? (I don't think this is the cause, but better to check.)

Now the thought...
When you audited that volume, you got mount requests for a *lot* of other volumes. This is probably because you used the default 2GB size for that device class instead of choosing a more appropriate larger size (e.g. 100GB). There is probably at least one large file spanned across these volumes, and the audit will need to mount them all; maybe it is trying to mount them all at the same time. Increasing the mount limit to more than the number of volumes in the filepool may let you access them all, so that you can then migrate to a larger maxsize for the volumes, if that is the issue.
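One way to sanity-check that idea is to count the volumes in the pool and set the mount limit above that number. A sketch using the pool and device class names from this thread (the 100 is just an illustrative value; pick something above the count returned):

```
select count(*) as total from volumes where stgpool_name='FILE_STG01'
update devclass filedevc mountlimit=100
```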
 
If filedevc is a file-based sequential dev class, I don't see why it would need a library associated with it, particularly one that already exists as an LTO-based library (if I'm correct).

You are correct, the library should not be in the devclass for a file pool, that devclass has nothing to do with the library. Not sure if it would cause an issue (may be silently ignored), but correct it shouldn't be there.

Can you try the audit vol that failed earlier again now that you've changed it?
 
A few questions and a thought here.

1. Can you confirm how many sessions are running with "q sess".

Only my session is running
tsm: TSM_PROD01>q sess
------ ------ ------ ------ ------- ------- ----- -------- --------------------
25 Tcp/Ip Run 0 S 203 196 Admin Linux86 ADMIN


2. Post the output of "q mount", "show asqueued"

No Mounts
q mount
ANR2034E QUERY MOUNT: No match found using this criteria.


3. Can you post the output of "df -k /opt/tivoli/tsm/stg01" (unix command)
df -h /opt/tivoli/tsm/stg01
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VG_DATA-LV_STG01
670G 31G 605G 5% /opt/tivoli/tsm/stg01


4. Post the output of "ulimit -a" (unix command)
[root@mp-man-l-p1 Colin_Dont_delete]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 38911
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 38911
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited


5. Can you confirm that the destination of each of your management classes points to the filepool and not to the diskpool (which has no space allocated to it)? (I don't think this is the cause, but better to check.)

Yep, they all seem to point to the filepool.


Policy Domain Name: PROD
Policy Set Name: ACTIVE
Mgmt Class Name: P-MC-MT
Default Mgmt Class ?: No
Description: Production MC Monthly
Space Management Technique: None
Auto-Migrate on Non-Use: 0
Migration Requires Backup?: Yes
Migration Destination: FILE_STG01
Last Update by (administrator): ADMIN
Last Update Date/Time: 09/17/2008 17:57:47
Managing profile:
Changes Pending: No

Policy Domain Name: PROD
Policy Set Name: ACTIVE
Mgmt Class Name: P-MC-WK
Default Mgmt Class ?: Yes
Description: Production MC Weekly
Space Management Technique: None
Auto-Migrate on Non-Use: 0
Migration Requires Backup?: Yes
Migration Destination: FILE_STG01
Last Update by (administrator): ADMIN
Last Update Date/Time: 09/17/2008 17:49:26
Managing profile:
Changes Pending: No

Policy Domain Name: PROD
Policy Set Name: PRODUCTION
Mgmt Class Name: P-MC-MT
Default Mgmt Class ?: No
Description: Production MC Monthly
Space Management Technique: None
Auto-Migrate on Non-Use: 0
more... (<ENTER> to continue, 'C' to cancel)

Migration Requires Backup?: Yes
Migration Destination: FILE_STG01
Last Update by (administrator): ADMIN
Last Update Date/Time: 09/17/2008 17:57:47
Managing profile:
Changes Pending: No

Policy Domain Name: PROD
Policy Set Name: PRODUCTION
Mgmt Class Name: P-MC-WK
Default Mgmt Class ?: Yes
Description: Production MC Weekly
Space Management Technique: None
Auto-Migrate on Non-Use: 0
Migration Requires Backup?: Yes
Migration Destination: FILE_STG01
Last Update by (administrator): ADMIN
Last Update Date/Time: 09/17/2008 17:49:26
Managing profile:
Changes Pending: No



Now the thought...
When you audited that volume, you got mount requests for a *lot* of other volumes. This is probably because you used the default 2GB size for that device class instead of choosing a more appropriate larger size (e.g. 100GB). There is probably at least one large file spanned across these volumes, and the audit will need to mount them all; maybe it is trying to mount them all at the same time. Increasing the mount limit to more than the number of volumes in the filepool may let you access them all, so that you can then migrate to a larger maxsize for the volumes, if that is the issue.

Yeah, that makes sense for the multiple audit mounts, because I do back up some rather large files. Some are 100+ GB.

Hey, Unix commands: yep, I know them well.
 
By the way, the output you posted doesn't show the destination for backup data. You need to run "q copy f=d" and look at the "copy destination" field.

What you have there is used for HSM type stuff, not backups.
 
By the way, the output you posted doesn't show the destination for backup data. You need to run "q copy f=d" and look at the "copy destination" field.

What you have there is used for HSM type stuff, not backups.



Ahh OK, my misunderstanding. All backup copy groups point to FILE_STG01.


tsm: TSM_PROD01>q copy f=d
Policy Domain Name: PROD
Policy Set Name: ACTIVE
Mgmt Class Name: P-MC-MT
Copy Group Name: STANDARD
Copy Group Type: Backup
Versions Data Exists: 4,380
Versions Data Deleted: 4,380
Retain Extra Versions: 4,380
Retain Only Version: 4,380
Copy Mode: Modified
Copy Serialization: Shared Static
Copy Frequency: 0
Copy Destination: FILE_STG01
Table of Contents (TOC) Destination:
Last Update by (administrator): ADMIN
Last Update Date/Time: 09/17/2008 18:01:14
Managing profile:
Changes Pending: No

Policy Domain Name: PROD
Policy Set Name: ACTIVE
Mgmt Class Name: P-MC-WK
Copy Group Name: STANDARD
Copy Group Type: Backup
Versions Data Exists: 28
Versions Data Deleted: 28
Retain Extra Versions: 28
Retain Only Version: 70
Copy Mode: Modified
Copy Serialization: Shared Static
Copy Frequency: 0
Copy Destination: FILE_STG01
Table of Contents (TOC) Destination:
Last Update by (administrator): ADMIN
Last Update Date/Time: 09/17/2008 17:51:07
more... (<ENTER> to continue, 'C' to cancel)

Managing profile:
Changes Pending: No

Policy Domain Name: PROD
Policy Set Name: PRODUCTION
Mgmt Class Name: P-MC-MT
Copy Group Name: STANDARD
Copy Group Type: Backup
Versions Data Exists: 4,380
Versions Data Deleted: 4,380
Retain Extra Versions: 4,380
Retain Only Version: 4,380
Copy Mode: Modified
Copy Serialization: Shared Static
Copy Frequency: 0
Copy Destination: FILE_STG01
Table of Contents (TOC) Destination:
Last Update by (administrator): ADMIN
Last Update Date/Time: 09/17/2008 18:01:14
Managing profile:
Changes Pending: No

Policy Domain Name: PROD
Policy Set Name: PRODUCTION
Mgmt Class Name: P-MC-WK
Copy Group Name: STANDARD
Copy Group Type: Backup
Versions Data Exists: 28
Versions Data Deleted: 28
Retain Extra Versions: 28
Retain Only Version: 70
Copy Mode: Modified
Copy Serialization: Shared Static
Copy Frequency: 0
Copy Destination: FILE_STG01
Table of Contents (TOC) Destination:
more... (<ENTER> to continue, 'C' to cancel)

Last Update by (administrator): ADMIN
Last Update Date/Time: 09/17/2008 17:51:07
Managing profile:
Changes Pending: No

 
If those are your only copygroups then things look set up properly there. Can we see a status summary of volumes in file_stg01? Post results of command:

select status,count(status) as total from volumes where stgpool_name='FILE_STG01' group by status

I'm also curious to see what the activity log looks like regarding migration processes. The output may be large, so we should redirect to file and attach instead of just posting text...assuming you are running admin command line from a windows host, run

q ac begind=-1 search=migration > C:\filedump.txt

On *nix, I suppose the redirect would be to ~/filedump.txt or something...
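On *nix the same idea works by running the query through dsmadmc in batch mode and letting the shell do the redirect. A sketch (the -id/-password values are placeholders for your admin credentials):

```
dsmadmc -id=admin -password=xxxxx "q actlog begindate=-1 search=migration" > ~/filedump.txt
```

I believe dsmadmc also accepts an -outfile option if you'd rather let the client write the file itself.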

Finally, are you on a 5.x or a 6.1 version of tsm server? I don't remember seeing reference to that...
 
If those are your only copygroups then things look set up properly there. Can we see a status summary of volumes in file_stg01? Post results of command:

select status,count(status) as total from volumes where stgpool_name='FILE_STG01' group by status

I'm also curious to see what the activity log looks like regarding migration processes. The output may be large, so we should redirect to file and attach instead of just posting text...assuming you are running admin command line from a windows host, run

q ac begind=-1 search=migration > C:\filedump.txt

On *nix, I suppose the redirect would be to ~/filedump.txt or something...

Finally, are you on a 5.x or a 6.1 version of tsm server? I don't remember seeing reference to that...

I am running TSM 5.5

Here is my status summary - Nice SQL command BTW.

tsm: TSM_PROD01>select status,count(status) as total from volumes where stgpool_name='FILE_STG01' group by status
STATUS TOTAL
------------------ -----------
EMPTY 2
FILLING 2
FULL 36


Migration is working as per normal now. Snippet below from the logs.

ANR1341I Scratch volume /opt/tivoli/tsm/stg01/0000CE2F.BFS has been deleted from storage pool FILE_STG01.
ANR1341I Scratch volume /opt/tivoli/tsm/stg01/0000CE30.BFS has been deleted from storage pool FILE_STG01.
ANR1341I Scratch volume /opt/tivoli/tsm/stg01/0000CE31.BFS has been deleted from storage pool FILE_STG01.
ANR0515I Process 69 closed volume /opt/tivoli/tsm/stg01/0000CE32.BFS.
ANR8340I FILE volume /opt/tivoli/tsm/stg01/0000CE33.BFS mounted.
ANR0512I Process 69 opened input volume /opt/tivoli/tsm/stg01/0000CE33.BFS.
ANR1101I Migration ended for volume /opt/tivoli/tsm/stg01/0000CE32.BFS.
ANR1100I Migration started for volume /opt/tivoli/tsm/stg01/0000CE33.BFS, storage pool FILE_STG01 (process number 69).
ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CE33.BFS is required for migration.
ANR1102I Removable volume /opt/tivoli/tsm/stg01/0000CE34.BFS is required for migration.
ANR0515I Process 69 closed volume /opt/tivoli/tsm/stg01/0000CE33.BFS.
ANR8340I FILE volume /opt/tivoli/tsm/stg01/0000CE34.BFS mounted.
ANR0512I Process 69 opened input volume /opt/tivoli/tsm/stg01/0000CE34.BFS.
ANR1341I Scratch volume /opt/tivoli/tsm/stg01/0000CE32.BFS has been deleted from storage pool FILE_STG01.

Running the SQL again, they are dropping off. So all looks good again. Phew!



STATUS TOTAL
------------------ -----------
EMPTY 3
FILLING 1
FULL 17
 
Looks good. I noticed from another thread what seemed to be the turning point:

It seems to be moving forward after this change

UPDATE DEVCLASS filedevc library=

I really do think that the reference to an actual library was confusing it, and maybe it was mixing the mount point limits with the LTO dev class... who knows. Apparently, after updating your dev class and restarting your server service (daemon), migration is progressing normally, as mount points are now allocated correctly.

I thought the library reference stuck out like a sore thumb when you posted the 'q dev filedevc f=d' output.

Glad you got things going again.
 