Subject: Re: [ADSM-L] Odd Migration Behaviour
From: "Gee, Norman" <Norman.Gee AT LC.CA DOT GOV>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Fri, 15 Feb 2013 00:30:45 +0000
What about placing this problem node into its own collocation group? Everything
else should migrate.
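
A minimal sketch of the commands, assuming the slow node is NODE1, its
current group is GROUP1, and PROBLEMGRP is the new group (all placeholder
names):

   delete collocmember GROUP1 NODE1
   define collocgroup PROBLEMGRP
   define collocmember PROBLEMGRP NODE1

With the node isolated in its own group, migration can fill tapes for the
other groups without waiting on its backup.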

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On Behalf Of 
white jeff
Sent: Thursday, February 14, 2013 12:35 PM
To: ADSM-L AT VM.MARIST DOT EDU
Subject: Re: Odd Migration Behaviour

Hi

Thanks for the response. There was a backup in progress, but it had only
backed up 7 GB. There was about 1.5 TB in the pool. The other clients had
finished backing up several hours earlier. Migrations ran as expected but
completed, leaving 1.5 TB in the pool.

I will find out tomorrow morning, when I get back to the client site, what
the %MIGR and %UTIL values are.

Thanks again

On 14 February 2013 17:55, Prather, Wanda <Wanda.Prather AT icfi DOT com> wrote:

> Do Q STGPOOL and look for %MIGR, "per cent migratable".
>
> If %MIGR is less than %UTIL, then there are chunks in the pool that aren't
> eligible and can't be migrated out yet, because there are transactions in
> progress (usually a backup still running).
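>
> As a sketch, assuming the disk pool is named DISKPOOL (a placeholder):
>
>    q stgpool DISKPOOL
>
> Compare the Pct Util and Pct Migr columns; the difference between them is
> data still tied to in-flight transactions that can't be migrated yet.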
>
>
> -----Original Message-----
> From: ADSM: Dist Stor Manager [mailto:ADSM-L AT VM.MARIST DOT EDU] On
> Behalf Of Shawn DREW
> Sent: Thursday, February 14, 2013 11:20 AM
> To: ADSM-L AT VM.MARIST DOT EDU
> Subject: Re: [ADSM-L] Odd Migration Behaviour
>
> Do you have a migration delay setting on the disk pool by any chance?
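>
> If you're not sure, then assuming the pool is called DISKPOOL (a
> placeholder name):
>
>    q stgpool DISKPOOL f=d
>
> shows the Migration Delay setting, and
>
>    update stgpool DISKPOOL migdelay=0
>
> would clear it.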
>
>
> Regards,
> Shawn
> ________________________________
> Shawn Drew
>
>
> > -----Original Message-----
> > From: ADSM-L AT VM.MARIST DOT EDU [mailto:ADSM-L AT VM.MARIST DOT EDU]
> > Sent: Thursday, February 14, 2013 5:41 AM
> > To: ADSM-L AT VM.MARIST DOT EDU
> > Subject: [ADSM-L] Odd Migration Behaviour
> >
> > Hi
> >
> > TSM Server v6.3.1
> >
> > Some odd behaviour when migrating a disk pool to tape
> >
> > The disk pool (devicetype=disk) is 6 TB in size and has approximately
> > 2.5 TB of data in it from last night's backups.
> >
> >
> > I back up the stgpool; it copies 2.5 TB to tape. Fine with this.
> >
> > I run mig stgpool lo=0 maxpr=3. It starts fine.
> >
> > (I do not use a duration parameter)
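> >
> > (Spelled out, and assuming the pool is named DISKPOOL, that shorthand
> > expands to something like:
> >
> >    migrate stgpool DISKPOOL lowmig=0
> >
> > with the three parallel processes typically coming from the pool's
> > MIGPROCESS attribute.)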
> >
> > The first migration finishes after a few minutes, migrating 500 MB. Lower
> > than I expected, I guess. The second migration finishes after 30 minutes,
> > migrating 138 GB. The third migration finishes after 1.5 hours, migrating
> > 819 GB.
> >
> > But the pool now shows about 24% utilised, so still about 1.5 TB of data
> > remaining.
> >
> >
> > The tape pool the migrations are writing to has collocation=group
> > specified. I have three collocgroups, containing approximately 250 nodes.
> > All of the nodes on the server are within these 3 groups. I noticed that
> > one of the clients was still backing up. It's a slow backup, always has
> > been. That node is in one of the collocgroups.
> > When that client backup completed, I ran migration again with lo=0, and
> > it is now beyond 1 TB and still running. Pct utilisation of the disk pool
> > is now down to 10%.
> >
> >
> > So: while a backup of a client in a collocgroup is still in progress,
> > will that prevent migration from migrating data for that specific
> > collocgroup?
> >
> > Any comments welcome.
> >
> > Regards
>
