Some nodes seem to have vanished.

Deviouz


When I do q node *, the node I am looking for is no longer in the table. How can I ensure that I have connectivity to the nodes if I am only using the TSM CLI?

We noticed this problem when we were about to run our differential backups. We tried to run the associated scripts but only got back this error message:


ANR2519E: The file space, fs name, does not exist on the NAS device associated with the node node name.

Also, it should be noted that we CAN see those nodes in the TIP GUI, so we are a little confused as to what is going on. Any thoughts on this would be much appreciated.

If I have provided too little info, please let me know and I will try to fill in the blanks.

Thanks.
 

Hi,

q node * TYPE=NAS

NAS nodes are not displayed by a plain "q node *"; you have to ask for them explicitly with TYPE=NAS.

What does your script look like, what are you trying to achieve, what NAS device do you have, etc.?

Harry
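
For reference, a minimal sketch of the queries that list NAS nodes and the file spaces TSM has registered for them (the node name BLUEARCTITAN3 is taken from later in this thread; adjust to your environment):

Code:
/* NAS nodes are hidden by the plain node query - ask for them explicitly */
query node * type=nas format=detailed

/* list the file spaces TSM knows about for a given NAS node */
query filespace BLUEARCTITAN3 format=detailed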
 

Code:
q script titan_differential_00 f=r

/* DIFFERENTIAL 00 */
backup node bluearctitan3 /[name] MGmtclass=NDMP_DIFFERENTIAL mode=DIFFerential toc=yes wait=yes
backup node bluearctitan3 /[name] MGmtclass=NDMP_DIFFERENTIAL mode=DIFFerential toc=yes wait=yes
backup node bluearctitan3 /[name] MGmtclass=NDMP_DIFFERENTIAL mode=DIFFerential toc=yes wait=yes
backup node bluearctitan3 /[name] MGmtclass=NDMP_DIFFERENTIAL mode=DIFFerential toc=yes wait=yes
backup node bluearctitan3 /[name] MGmtclass=NDMP_DIFFERENTIAL mode=DIFFerential toc=yes wait=yes


We are trying to do differential backups of these nodes and are getting the errors I stated earlier.

The NAS device we are using is a BlueArc, the Titan series.

Note: /[name] stands for the different volume names, which I cannot supply because of our security policy.

Oh, and I tried the command and it did show the nodes that were missing, but the backups are still not working.
 

Hi,

I do not know the BlueArc devices, so I cannot guide you there, but these are the general steps you have to check (a consolidated sketch of the checks follows below):
a) be sure you are trying to connect to the right filer - check the corresponding datamover ("q datamover bluearctitan3 type=nas f=d" - check the IP and the username - does the user have sufficient access rights?)
b) check if the /[name] filesystem exists on the device
c) if it does not exist, the definition may still be correct if it goes through a virtual filespace mapping - run "query virtualfsmapping bluearctitan3 /[name]" to see what is defined under this name and check whether that path exists on the filer

Tell us what you have found ...

Harry
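
For convenience, a hedged, consolidated sketch of the checks above (the node and filespace names are the placeholders already used in this thread):

Code:
/* a) is the data mover pointing at the right filer, port and user? */
query datamover bluearctitan3 type=nas format=detailed

/* b) what does TSM have registered for this file space? (the volume must also exist on the filer itself) */
query filespace bluearctitan3 /[name]

/* c) is /[name] really a virtual filespace mapping, and what does it point to? */
query virtualfsmapping bluearctitan3 /[name]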
 

OK, here goes:

a) I found this:

Code:
q datamover bluearctitan3 type=nas f=d

Data Mover Name:                 BLUEARCTITAN3
Data Mover Type:                 NAS
IP Address:                      xxx.xxx.xxx.xxx
TCP/IP Port Number:              10000
User Name:                       NDMP
Storage Pool Data Format:        NDMP Dump
On-Line:                         Yes
Last Update by (administrator):  CCAMA
Last Update Date/Time:           07/08/10 17:26:47

...that user has the appropriate privileges.

b) I assume you are talking about the BlueArc Titan filespaces, /[name], and sure enough they are all there.

c) ...and here is what I found regarding point c):
Code:
query virtualfsmapping bluearctitan3 /[name]

Node Name        Virtual Filespace    Filespace Name                Path        Hexadecimal
                 Mapping Name                                                   Path?
--------------   ------------------   ---------------------------   ---------   -----------
BLUEARCTITAN3    /[name]              /__VOLUME__/WindowsStorage    /[name]     No
Note that the virtual filespace mapping and the path /[name] are the same.
 

Hi,
I edited your post to give it a better format. So it seems the BlueArc FULL PATH to the volume you are trying to back up is /__VOLUME__/WindowsStorage/[name] - right? Does that exist? Can you show us any output from the BlueArc? (So any experienced BlueArc admin can tell us whether it is OK or not.)
You are storing this "folder" in TSM as filespace /[name] ...
Do you have a problem with differentials only or with fulls as well?

Harry
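
For context, a hedged sketch of how such a virtual filespace mapping is typically defined (the path /__VOLUME__/WindowsStorage/[name] is the one reconstructed above; /[name] remains the placeholder used throughout this thread):

Code:
/* map the short TSM filespace name onto the real directory on the filer */
define virtualfsmapping bluearctitan3 /[name] /__VOLUME__/WindowsStorage /[name]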
 

Thanks Harry,

OK, so I tried the FULL backups and they seem to be working just fine. We also tried to reboot the AIX machine, and that didn't change anything either. Pinging bluearctitan1, 2 and 3 works from the AIX box (both by name and by IP). At this point I agree that it could very possibly be a faulty mount path. We have placed a support call with IBM and are currently waiting to hear back from them.

I will continue to monitor this thread if you have any more thoughts or ideas, and in the unlikely event we are able to fix it by ourselves, I will try to explain what we did or did not do to make it work.
 

We fixed the paths to the BlueArc. Now we are getting a different error.

"ANR0984I Process 120 for BACKUP NAS (DIFFERENTIAL) started in the FOREGROUND at 21:57:10.
ANR1064I Differential backup of NAS node BLUEARCTITAN3, file system /xyz, started as process 120 by administrator ADMIN.
ANR1069E NAS Backup process 120 terminated - insufficient number of mount points available for removable media.
ANR0985I Process 120 for BACKUP NAS (DIFFERENTIAL) running in the FOREGROUND completed with completion state FAILURE at 21:57:10.
ANR1762E BACKUP NODE: Command failed for node BLUEARCTITAN3, filespace /xyz - mount point unavailable."

There are scratch tapes available.
The drives (/dev/rmt0-9) are mounted correctly.
The paths look fine.
The tape format is right.

What is up with this thing?
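
A hedged sketch of the queries typically used to narrow down an ANR1069E mount-point shortage (placeholders in angle brackets follow the convention used later in this thread):

Code:
/* what is mounted right now, and what mount limit does the device class allow? */
query mount
query devclass <devclass> format=detailed

/* drives, paths and library volumes */
query drive format=detailed
query path format=detailed
query libvolume <library>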
 

Hi,

this happens when the management class used points to a device which
a) is offline (drives, paths), or
b) has its paths defined incorrectly (or not at all).

Show us the exact command you are running and output of the following commands:
q devc <XYZ> f=d
q libr <ABC> f=d
q path f=d
q dri f=d
q act begint=<command start> endt=<failure_time>

Harry
 

If you continue to have problems, another option is to mount the NAS file systems on another server that has the TSM client installed and back up the file systems from there (or mount them on the TSM server itself and set up a cron job). A rough sketch follows below.


Mike
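
A rough sketch of that workaround, assuming the Titan exports the volume over NFS to an AIX box that has the TSM backup-archive client installed (the mount point and schedule below are hypothetical):

Code:
# mount the NAS export on a machine that runs the TSM client
mount bluearctitan3:/[name] /mnt/titan3

# back it up with the ordinary backup-archive client
dsmc incremental /mnt/titan3

# or drive it from cron, e.g. nightly at 22:00
0 22 * * * /usr/bin/dsmc incremental /mnt/titan3 >> /var/log/titan3_backup.log 2>&1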
 

OK, I attached a text file with the output you requested. Keep in mind that /xyz is a stand-in: it is used consistently throughout the file, but the actual names in our system are different.

(Attachment: output.txt)
 

Hi,

it seems you are doing the NDMP-backup-over-LAN (filer-to-server) variant, as there is no path defined between the data mover and the drives (all paths have "source type = server") - is this correct?
Now I need to see the backup copygroup definition for the NDMP_DIFFERENTIAL management class. The only interesting info there is the destination stgpool. Show us the "q stg XYZ f=d" and "q devc ABC f=d" (for the corresponding device class).

Harry
 

Ok, here it is.

Code:
tsm: TSM1>q stg ndmp_backup_pool f=d

                    Storage Pool Name: NDMP_BACKUP_POOL
                    Storage Pool Type: Primary
                    Device Class Name: PRIMLTO
                   Estimated Capacity: 339,462 G
                   Space Trigger Util: 
                             Pct Util: 81.5
                             Pct Migr: 82.4
                          Pct Logical: 100.0
                         High Mig Pct: 90
                          Low Mig Pct: 70
                      Migration Delay: 0
                   Migration Continue: Yes
                  Migration Processes: 1
                Reclamation Processes: 1
                    Next Storage Pool: 
                 Reclaim Storage Pool: 
               Maximum Size Threshold: No Limit
                               Access: Read/Write
                          Description: Storage pool for BLUEARCTITAN1_HILLSIDE NAS file server.
                    Overflow Location: 
                Cache Migrated Files?: 
                           Collocate?: Group
                Reclamation Threshold: 60
            Offsite Reclamation Limit: 
      Maximum Scratch Volumes Allowed: 10
       Number of Scratch Volumes Used: 3
        Delay Period for Volume Reuse: 0 Day(s)
               Migration in Progress?: No
                 Amount Migrated (MB): 0.00
     Elapsed Migration Time (seconds): 0
             Reclamation in Progress?: No
       Last Update by (administrator): ADMIN
                Last Update Date/Time: 06/15/11   21:30:08
             Storage Pool Data Format: Native
                 Copy Storage Pool(s): 
                  Active Data Pool(s): 
              Continue Copy on Error?: Yes
                             CRC Data: No
                     Reclamation Type: Threshold
          Overwrite Data when Deleted: 
                    Deduplicate Data?: No
 Processes For Identifying Duplicates: 
            Duplicate Data Not Stored: 
                       Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No


tsm: TSM1>q devc PRIMLTO f=d

             Device Class Name: PRIMLTO
        Device Access Strategy: Sequential
            Storage Pool Count: 6
                   Device Type: LTO
                        Format: DRIVE
         Est/Max Capacity (MB): 
                   Mount Limit: 10
              Mount Wait (min): 60
         Mount Retention (min): 10
                  Label Prefix: ADSM
                       Library: TSMMANAGER
                     Directory: 
                   Server Name: 
                  Retry Period: 
                Retry Interval: 
                        Shared: 
            High-level Address: 
              Minimum Capacity: 
                          WORM: No
              Drive Encryption: Off
               Scaled Capacity: 
Last Update by (administrator): ADMIN
         Last Update Date/Time: 06/15/11   23:00:22
 

Hi,

I need to see the copy group definition - what is the value of the "tocdestination" parameter? Show me the stgpool and device class info for it. What about running the backup without TOC - does it work?

Harry
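
For reference, a minimal sketch of running one of the differentials without a table of contents, as suggested above (same placeholders as the original script):

Code:
/* identical to the script entry, but with TOC disabled to take the TOC destination out of the picture */
backup node bluearctitan3 /[name] mgmtclass=NDMP_DIFFERENTIAL mode=differential toc=no wait=yes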
 

Code:
tsm: TSM1>q copygr NDMP active NDMP_DIFFERENTIAL f=d

                 Policy Domain Name: NDMP
                    Policy Set Name: ACTIVE
                    Mgmt Class Name: NDMP_DIFFERENTIAL
                    Copy Group Name: STANDARD
                    Copy Group Type: Backup
               Versions Data Exists: No Limit
              Versions Data Deleted: No Limit
              Retain Extra Versions: 185
                Retain Only Version: 185
                          Copy Mode: Modified
                 Copy Serialization: Shared Static
                     Copy Frequency: 0
                   Copy Destination: NDMP_PRIM_TAPE
Table of Contents (TOC) Destination: NDMPTOC_DISK_POOL
     Last Update by (administrator): ADMIN
              Last Update Date/Time: 08/06/10   11:41:51
                   Managing profile: 
                    Changes Pending: No


tsm: TSM1>q devc ndmplto f=d

             Device Class Name: NDMPLTO
        Device Access Strategy: Sequential
            Storage Pool Count: 1
                   Device Type: NAS
                        Format: DRIVE
         Est/Max Capacity (MB): 409,600.0
                   Mount Limit: 10
              Mount Wait (min): 60
         Mount Retention (min): 0
                  Label Prefix: ADSM
                       Library: TSMMANAGER
                     Directory: 
                   Server Name: 
                  Retry Period: 
                Retry Interval: 
                        Shared: 
            High-level Address: 
              Minimum Capacity: 
                          WORM: No
              Drive Encryption: 
               Scaled Capacity: 
Last Update by (administrator): ADMIN
         Last Update Date/Time: 06/15/11   23:00:03


tsm: TSM1>q devc primlto f=d

             Device Class Name: PRIMLTO
        Device Access Strategy: Sequential
            Storage Pool Count: 6
                   Device Type: LTO
                        Format: DRIVE
         Est/Max Capacity (MB): 
                   Mount Limit: 10
              Mount Wait (min): 60
         Mount Retention (min): 10
                  Label Prefix: ADSM
                       Library: TSMMANAGER
                     Directory: 
                   Server Name: 
                  Retry Period: 
                Retry Interval: 
                        Shared: 
            High-level Address: 
              Minimum Capacity: 
                          WORM: No
              Drive Encryption: Off
               Scaled Capacity: 
Last Update by (administrator): ADMIN
         Last Update Date/Time: 06/16/11   16:11:39
 

Hi,

and the "NDMPTOC_DISK_POOL" ? What about the backup without TOC? What is the NDMPLTO device class used for? What is the storage pool defined using this device class? How does the management class used for FULL backup look like (copygroup destination, TOC destination, etc.) ?

Harry
 

Well, this is embarrassing: it seems we forgot to "activate" the policy changes after the other TSM guy quit his job here. The diffs are working now, and we will finally try to follow our intended schedule for fulls tomorrow.
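
For anyone landing here later, a hedged sketch of that activation step, assuming the policy changes live in a policy set named STANDARD of the NDMP domain (the actual policy set name is not given in this thread):

Code:
/* check the policy set for errors, then make it the active one */
validate policyset NDMP STANDARD
activate policyset NDMP STANDARD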

I just wanted to thank you for the tips and support you have posted here. I will most certainly come back to this forum for more help, and I hope I will be received in the same manner as in this thread. Alternatively, I will be back tomorrow because the full backups did not work - but I am hopeful that they will.

Thanks again

Cheers!

/Johnny <--n00b@tsm
 