Subject: Re: [ADSM-L] I'm missing something somewhere -- I need statistics on storage pool backup and can't seem to find them
From: km <km AT GROGG DOT ORG>
To: ADSM-L AT VM.MARIST DOT EDU
Date: Mon, 12 Jan 2009 16:30:51 +0100
On 12/01, Kauffman, Tom wrote:
> > On Fri, Jan 9, 2009 at 3:45 PM, Kauffman, Tom <KauffmanT AT nibco DOT com>
> > wrote:
> >
> > You can get that pretty easily from the SUMMARY table.  You'll have
> > to play with it though, to pick out specifically which stgpool
> > process you want.
> > Start here with:
> >
> > select * from summary where activity='STGPOOL BACKUP'
> >
>
> This is close, but not what I need --
>
> I've got a severe imbalance in one of my storage pools that I'm trying to 
> document, so I can get it fixed. To do that, I need a way to break down the 
> result I get here to a count by process:
>       START_TIME: 2009-01-08 04:49:12.000000
>         END_TIME: 2009-01-08 08:27:48.000000
>         ACTIVITY: STGPOOL BACKUP
>           NUMBER: 3194
>           ENTITY: ARCH-LT4 -> ARCH-LT2-COPY
>         COMMMETH:
>          ADDRESS:
>    SCHEDULE_NAME:
>         EXAMINED: 50531
>         AFFECTED: 50531
>           FAILED: 0
>            BYTES: 876242125091
>             IDLE: 0
>           MEDIAW: 188
>        PROCESSES: 2
>       SUCCESSFUL: YES
>      VOLUME_NAME:
>       DRIVE_NAME:
>     LIBRARY_NAME:
>         LAST_USE:
>        COMM_WAIT: 0
> NUM_OFFSITE_VOLS:
>
> I'm fairly certain that nearly 600 GB of this was from one node and one
> filesystem (a VMware VirtualCenter proxy); I just need to be able to prove
> it, so I can get the data split up better (we're supposed to have the
> off-site copies and matching TSM database backups done by 07:30).
>
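
If you stay with the SUMMARY table, you can at least isolate that one process, something along these lines (untested; adjust the entity string and the time window to your environment):

select start_time, end_time, entity, processes, examined, bytes from summary where activity='STGPOOL BACKUP' and entity='ARCH-LT4 -> ARCH-LT2-COPY' and start_time>current_timestamp - 24 hours

But SUMMARY only rolls everything up per process, so it still won't show which node or filespace the bytes came from.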

Why not just run a Q OCC on the node before and after the STGPOOL BACKUP?
That way you can see exactly how much that node's occupancy has grown.
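
For example (the node name PROXY_NODE is just a placeholder for whichever node you suspect):

q occupancy PROXY_NODE stg=ARCH-LT2-COPY

or, if you would rather diff the numbers for every node at once:

select node_name, filespace_name, num_files, physical_mb from occupancy where stgpool_name='ARCH-LT2-COPY'

Run either one before the BACKUP STGPOOL and again afterwards; the per-node, per-filespace delta is the breakdown you are after.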
