ADSM-L

Re: Migration process using more than 1 tape drive even though mig proc = 1

2000-11-06 01:27:24
Subject: Re: Migration process using more than 1 tape drive even though mig proc = 1
From: "Shalev, Jonathan" <jonathan.shalev AT INTEL DOT COM>
Date: Sun, 5 Nov 2000 22:26:48 -0800
When the disk pool is full, a session can/will write directly to a tape in
the "Next Storage Pool" for the disk pool.
Run "q session f=d" and you will see which session is using the tape.

Jonathan  Shalev
IDC Computing / Systems Engineering
UNIX Server Platforms
Intel Israel (74) Ltd.
Jonathan.Shalev AT intel DOT com
Phone: +972-4-865-6588, Fax: +972-4-865-5999


> -----Original Message-----
> From: Pietro Brenni [mailto:pietrob AT AU1.IBM DOT COM]
> Sent: Monday, November 06, 2000 6:20 AM
> To: ADSM-L AT VM.MARIST DOT EDU
> Subject: Migration process using more than 1 tape drive even though migproc = 1
>
>
> This problem is quite peculiar.
>
> I have occurrences where a disk pool becomes full, or exceeds the high
> migration threshold, and a migration process starts, issuing a tape
> mount. About 3 minutes later the activity log shows another tape being
> mounted for the same tape pool.
> "q proc" shows only 1 tape mounted for the migration process.
> "q mount" shows 2 tapes mounted and in use.
> "q vol f=d" shows both tapes are being updated.
> Changing one of the tapes to read-only sets its status to IDLE (in
> "q mount") and produces an error for a session from a node. (Why, as all
> backups go directly to disk?)
> Changing the tape back to readwrite, the migration process continues to
> use the tape.
> I tried this again, and this time a new scratch tape was mounted, but it
> still does not show in "q proc" for the migration process.
>
>
> The storage pool has these settings:
>
>                Storage Pool Name: SP_DISK_POOL
>                Storage Pool Type: Primary
>                Device Class Name: DISK
>          Estimated Capacity (MB): 4,000.0
>                         Pct Util: 100.0
>                         Pct Migr: 100.0
>                      Pct Logical: 100.0
>                     High Mig Pct: 90
>                      Low Mig Pct: 85
>                  Migration Delay: 0
>               Migration Continue: Yes
>              Migration Processes: 1
>                Next Storage Pool: SP_TAPE_POOL
>             Reclaim Storage Pool:
>           Maximum Size Threshold: No Limit
>                           Access: Read/Write
>                      Description: SP Disk Pool for Filesystem Backups
>                Overflow Location:
>            Cache Migrated Files?: No
>                       Collocate?:
>            Reclamation Threshold:
>  Maximum Scratch Volumes Allowed:
>    Delay Period for Volume Reuse:
>           Migration in Progress?: Yes
>             Amount Migrated (MB): 437.13
> Elapsed Migration Time (seconds): 453
>         Reclamation in Progress?:
>  Volume Being Migrated/Reclaimed:
>   Last Update by (administrator): ADMIN
>            Last Update Date/Time: 06-11-2000 13:13:38
>
> Also, the device class mount retention is set to 1 minute.
>
>
> Now this is a problem, as I'm using a 3570 and only have 19 slots.
> The above behaviour causes 2 tapes to be partially filled, wasting 1
> valuable slot.
> TSM server = 3.7.3.8
> TSM client = 3.7.2.15
>
> Is there something about the migration process that isn't documented,
> or is it a bug?
>
>
>  Regards Ped
>
> ( Pietro M D  Brenni )
> IBM Global Services Australia
> ZE06 (Zenith Centre - Tower A)
> Level 6,
> 821 Pacific Highway
> Chatswood NSW 2067
> Sydney Australia
> Ph:  +61-2-8448 4788
> Fax: +61-2-8448 4006
>